00:00:00.000 Started by upstream project "autotest-per-patch" build number 132542
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.006 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.007 The recommended git tool is: git
00:00:00.007 using credential 00000000-0000-0000-0000-000000000002
00:00:00.009 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.024 Fetching changes from the remote Git repository
00:00:00.028 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.044 Using shallow fetch with depth 1
00:00:00.044 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.044 > git --version # timeout=10
00:00:00.083 > git --version # 'git version 2.39.2'
00:00:00.083 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.119 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.119 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.709 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.719 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.731 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.731 > git config core.sparsecheckout # timeout=10
00:00:02.744 > git read-tree -mu HEAD # timeout=10
00:00:02.759 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.780 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.780 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.870 [Pipeline] Start of Pipeline
00:00:02.884 [Pipeline] library
00:00:02.886 Loading library shm_lib@master
00:00:02.886 Library shm_lib@master is cached. Copying from home.
00:00:02.904 [Pipeline] node
00:00:02.912 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_3
00:00:02.914 [Pipeline] {
00:00:02.922 [Pipeline] catchError
00:00:02.923 [Pipeline] {
00:00:02.935 [Pipeline] wrap
00:00:02.944 [Pipeline] {
00:00:02.952 [Pipeline] stage
00:00:02.953 [Pipeline] { (Prologue)
00:00:02.971 [Pipeline] echo
00:00:02.972 Node: VM-host-SM17
00:00:02.979 [Pipeline] cleanWs
00:00:02.990 [WS-CLEANUP] Deleting project workspace...
00:00:02.990 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.996 [WS-CLEANUP] done
00:00:03.268 [Pipeline] setCustomBuildProperty
00:00:03.340 [Pipeline] httpRequest
00:00:03.655 [Pipeline] echo
00:00:03.656 Sorcerer 10.211.164.20 is alive
00:00:03.664 [Pipeline] retry
00:00:03.666 [Pipeline] {
00:00:03.679 [Pipeline] httpRequest
00:00:03.684 HttpMethod: GET
00:00:03.685 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.685 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.686 Response Code: HTTP/1.1 200 OK
00:00:03.686 Success: Status code 200 is in the accepted range: 200,404
00:00:03.687 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.832 [Pipeline] }
00:00:03.844 [Pipeline] // retry
00:00:03.850 [Pipeline] sh
00:00:04.222 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.238 [Pipeline] httpRequest
00:00:04.597 [Pipeline] echo
00:00:04.599 Sorcerer 10.211.164.20 is alive
00:00:04.607 [Pipeline] retry
00:00:04.609 [Pipeline] {
00:00:04.622 [Pipeline] httpRequest
00:00:04.626 HttpMethod: GET
00:00:04.626 URL: http://10.211.164.20/packages/spdk_658cb4c046f436357f3704c3b66770d3fa5a8123.tar.gz
00:00:04.627 Sending request to url: http://10.211.164.20/packages/spdk_658cb4c046f436357f3704c3b66770d3fa5a8123.tar.gz
00:00:04.627 Response Code: HTTP/1.1 200 OK
00:00:04.628 Success: Status code 200 is in the accepted range: 200,404
00:00:04.629 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_658cb4c046f436357f3704c3b66770d3fa5a8123.tar.gz
00:00:20.752 [Pipeline] }
00:00:20.769 [Pipeline] // retry
00:00:20.776 [Pipeline] sh
00:00:21.054 + tar --no-same-owner -xf spdk_658cb4c046f436357f3704c3b66770d3fa5a8123.tar.gz
00:00:24.354 [Pipeline] sh
00:00:24.657 + git -C spdk log --oneline -n5
00:00:24.657 658cb4c04 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:00:24.657 fc308e3c5 accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:00:24.657 e43b3b914 bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:00:24.657 752c08b51 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf()
00:00:24.657 22fe262e0 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:00:24.675 [Pipeline] writeFile
00:00:24.690 [Pipeline] sh
00:00:24.978 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:24.988 [Pipeline] sh
00:00:25.262 + cat autorun-spdk.conf
00:00:25.262 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.262 SPDK_RUN_ASAN=1
00:00:25.262 SPDK_RUN_UBSAN=1
00:00:25.262 SPDK_TEST_RAID=1
00:00:25.262 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.268 RUN_NIGHTLY=0
00:00:25.271 [Pipeline] }
00:00:25.284 [Pipeline] // stage
00:00:25.301 [Pipeline] stage
00:00:25.303 [Pipeline] { (Run VM)
00:00:25.316 [Pipeline] sh
00:00:25.598 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:25.598 + echo 'Start stage prepare_nvme.sh'
00:00:25.598 Start stage prepare_nvme.sh
00:00:25.598 + [[ -n 4 ]]
00:00:25.598 + disk_prefix=ex4
00:00:25.598 + [[ -n /var/jenkins/workspace/raid-vg-autotest_3 ]]
00:00:25.598 + [[ -e /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]]
00:00:25.598 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf
00:00:25.598 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.598 ++ SPDK_RUN_ASAN=1
00:00:25.598 ++ SPDK_RUN_UBSAN=1
00:00:25.598 ++ SPDK_TEST_RAID=1
00:00:25.598 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:25.598 ++ RUN_NIGHTLY=0
00:00:25.598 + cd /var/jenkins/workspace/raid-vg-autotest_3
00:00:25.598 + nvme_files=()
00:00:25.598 + declare -A nvme_files
00:00:25.598 + backend_dir=/var/lib/libvirt/images/backends
00:00:25.598 + nvme_files['nvme.img']=5G
00:00:25.598 + nvme_files['nvme-cmb.img']=5G
00:00:25.598 + nvme_files['nvme-multi0.img']=4G
00:00:25.598 + nvme_files['nvme-multi1.img']=4G
00:00:25.598 + nvme_files['nvme-multi2.img']=4G
00:00:25.598 + nvme_files['nvme-openstack.img']=8G
00:00:25.598 + nvme_files['nvme-zns.img']=5G
00:00:25.598 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:25.598 + (( SPDK_TEST_FTL == 1 ))
00:00:25.598 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:25.598 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:25.598 + for nvme in "${!nvme_files[@]}"
00:00:25.598 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:00:25.598 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:25.857 + for nvme in "${!nvme_files[@]}"
00:00:25.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:00:25.857 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.857 + for nvme in "${!nvme_files[@]}"
00:00:25.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:00:25.857 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:25.857 + for nvme in "${!nvme_files[@]}"
00:00:25.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:00:25.857 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:25.857 + for nvme in "${!nvme_files[@]}"
00:00:25.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:00:26.117 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:26.117 + for nvme in "${!nvme_files[@]}"
00:00:26.117 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:00:27.055 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:27.055 + for nvme in "${!nvme_files[@]}"
00:00:27.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:00:27.622 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:27.881 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:00:27.881 + echo 'End stage prepare_nvme.sh'
00:00:27.881 End stage prepare_nvme.sh
00:00:27.893 [Pipeline] sh
00:00:28.174 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:28.174 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:00:28.174
00:00:28.174 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant
00:00:28.174 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk
00:00:28.174 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3
00:00:28.174 HELP=0
00:00:28.174 DRY_RUN=0
00:00:28.174 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:00:28.174 NVME_DISKS_TYPE=nvme,nvme,
00:00:28.174 NVME_AUTO_CREATE=0
00:00:28.174 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:00:28.174 NVME_CMB=,,
00:00:28.174 NVME_PMR=,,
00:00:28.174 NVME_ZNS=,,
00:00:28.174 NVME_MS=,,
00:00:28.174 NVME_FDP=,,
00:00:28.174 SPDK_VAGRANT_DISTRO=fedora39
00:00:28.174 SPDK_VAGRANT_VMCPU=10
00:00:28.174 SPDK_VAGRANT_VMRAM=12288
00:00:28.174 SPDK_VAGRANT_PROVIDER=libvirt
00:00:28.174 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:28.174 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:28.174 SPDK_OPENSTACK_NETWORK=0
00:00:28.174 VAGRANT_PACKAGE_BOX=0
00:00:28.174 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:00:28.174 FORCE_DISTRO=true
00:00:28.174 VAGRANT_BOX_VERSION=
00:00:28.174 EXTRA_VAGRANTFILES=
00:00:28.174 NIC_MODEL=e1000
00:00:28.174
00:00:28.174 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt'
00:00:28.174 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3
00:00:31.461 Bringing machine 'default' up with 'libvirt' provider...
00:00:32.028 ==> default: Creating image (snapshot of base box volume).
00:00:32.287 ==> default: Creating domain with the following settings...
00:00:32.287 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732646843_1ee3a77cd74d37c7c2ad
00:00:32.287 ==> default: -- Domain type: kvm
00:00:32.287 ==> default: -- Cpus: 10
00:00:32.287 ==> default: -- Feature: acpi
00:00:32.287 ==> default: -- Feature: apic
00:00:32.287 ==> default: -- Feature: pae
00:00:32.287 ==> default: -- Memory: 12288M
00:00:32.287 ==> default: -- Memory Backing: hugepages:
00:00:32.287 ==> default: -- Management MAC:
00:00:32.287 ==> default: -- Loader:
00:00:32.287 ==> default: -- Nvram:
00:00:32.287 ==> default: -- Base box: spdk/fedora39
00:00:32.287 ==> default: -- Storage pool: default
00:00:32.287 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732646843_1ee3a77cd74d37c7c2ad.img (20G)
00:00:32.287 ==> default: -- Volume Cache: default
00:00:32.287 ==> default: -- Kernel:
00:00:32.287 ==> default: -- Initrd:
00:00:32.287 ==> default: -- Graphics Type: vnc
00:00:32.287 ==> default: -- Graphics Port: -1
00:00:32.287 ==> default: -- Graphics IP: 127.0.0.1
00:00:32.287 ==> default: -- Graphics Password: Not defined
00:00:32.287 ==> default: -- Video Type: cirrus
00:00:32.287 ==> default: -- Video VRAM: 9216
00:00:32.287 ==> default: -- Sound Type:
00:00:32.287 ==> default: -- Keymap: en-us
00:00:32.287 ==> default: -- TPM Path:
00:00:32.287 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:32.287 ==> default: -- Command line args:
00:00:32.287 ==> default: -> value=-device,
00:00:32.287 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:32.287 ==> default: -> value=-drive,
00:00:32.287 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:00:32.287 ==> default: -> value=-device,
00:00:32.287 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.287 ==> default: -> value=-device,
00:00:32.287 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:32.287 ==> default: -> value=-drive,
00:00:32.287 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:32.287 ==> default: -> value=-device,
00:00:32.287 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.287 ==> default: -> value=-drive,
00:00:32.287 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:32.287 ==> default: -> value=-device,
00:00:32.287 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.287 ==> default: -> value=-drive,
00:00:32.287 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:32.287 ==> default: -> value=-device,
00:00:32.287 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.287 ==> default: Creating shared folders metadata...
00:00:32.287 ==> default: Starting domain.
00:00:34.189 ==> default: Waiting for domain to get an IP address...
00:00:49.062 ==> default: Waiting for SSH to become available...
00:00:50.434 ==> default: Configuring and enabling network interfaces...
00:00:54.716 default: SSH address: 192.168.121.38:22
00:00:54.716 default: SSH username: vagrant
00:00:54.716 default: SSH auth method: private key
00:00:56.661 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:04.804 ==> default: Mounting SSHFS shared folder...
00:01:05.742 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:05.742 ==> default: Checking Mount..
00:01:07.119 ==> default: Folder Successfully Mounted!
00:01:07.119 ==> default: Running provisioner: file...
00:01:08.057 default: ~/.gitconfig => .gitconfig
00:01:08.316
00:01:08.316 SUCCESS!
00:01:08.316
00:01:08.316 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:01:08.316 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:08.316 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:01:08.316
00:01:08.325 [Pipeline] }
00:01:08.340 [Pipeline] // stage
00:01:08.349 [Pipeline] dir
00:01:08.350 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt
00:01:08.351 [Pipeline] {
00:01:08.365 [Pipeline] catchError
00:01:08.367 [Pipeline] {
00:01:08.380 [Pipeline] sh
00:01:08.664 + vagrant ssh-config --host vagrant
00:01:08.664 + sed -ne /^Host/,$p
00:01:08.664 + tee ssh_conf
00:01:11.994 Host vagrant
00:01:11.994 HostName 192.168.121.38
00:01:11.994 User vagrant
00:01:11.994 Port 22
00:01:11.994 UserKnownHostsFile /dev/null
00:01:11.994 StrictHostKeyChecking no
00:01:11.994 PasswordAuthentication no
00:01:11.994 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:11.994 IdentitiesOnly yes
00:01:11.994 LogLevel FATAL
00:01:11.994 ForwardAgent yes
00:01:11.994 ForwardX11 yes
00:01:11.994
00:01:12.009 [Pipeline] withEnv
00:01:12.012 [Pipeline] {
00:01:12.028 [Pipeline] sh
00:01:12.308 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:12.308 source /etc/os-release
00:01:12.308 [[ -e /image.version ]] && img=$(< /image.version)
00:01:12.308 # Minimal, systemd-like check.
00:01:12.308 if [[ -e /.dockerenv ]]; then
00:01:12.308 # Clear garbage from the node's name:
00:01:12.308 # agt-er_autotest_547-896 -> autotest_547-896
00:01:12.308 # $HOSTNAME is the actual container id
00:01:12.308 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:12.308 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:12.308 # We can assume this is a mount from a host where container is running,
00:01:12.308 # so fetch its hostname to easily identify the target swarm worker.
00:01:12.308 container="$(< /etc/hostname) ($agent)"
00:01:12.308 else
00:01:12.308 # Fallback
00:01:12.308 container=$agent
00:01:12.308 fi
00:01:12.308 fi
00:01:12.308 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:12.308
00:01:12.579 [Pipeline] }
00:01:12.596 [Pipeline] // withEnv
00:01:12.606 [Pipeline] setCustomBuildProperty
00:01:12.623 [Pipeline] stage
00:01:12.625 [Pipeline] { (Tests)
00:01:12.645 [Pipeline] sh
00:01:12.925 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:13.197 [Pipeline] sh
00:01:13.477 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:13.496 [Pipeline] timeout
00:01:13.497 Timeout set to expire in 1 hr 30 min
00:01:13.499 [Pipeline] {
00:01:13.517 [Pipeline] sh
00:01:13.798 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:14.365 HEAD is now at 658cb4c04 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:01:14.377 [Pipeline] sh
00:01:14.656 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:14.929 [Pipeline] sh
00:01:15.214 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:15.489 [Pipeline] sh
00:01:15.769 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:16.029 ++ readlink -f spdk_repo
00:01:16.029 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:16.029 + [[ -n /home/vagrant/spdk_repo ]]
00:01:16.029 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:16.029 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:16.029 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:16.029 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:16.029 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:16.029 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:16.029 + cd /home/vagrant/spdk_repo
00:01:16.029 + source /etc/os-release
00:01:16.029 ++ NAME='Fedora Linux'
00:01:16.029 ++ VERSION='39 (Cloud Edition)'
00:01:16.029 ++ ID=fedora
00:01:16.029 ++ VERSION_ID=39
00:01:16.029 ++ VERSION_CODENAME=
00:01:16.029 ++ PLATFORM_ID=platform:f39
00:01:16.029 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:16.029 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:16.029 ++ LOGO=fedora-logo-icon
00:01:16.029 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:16.029 ++ HOME_URL=https://fedoraproject.org/
00:01:16.029 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:16.029 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:16.029 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:16.029 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:16.029 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:16.029 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:16.029 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:16.029 ++ SUPPORT_END=2024-11-12
00:01:16.029 ++ VARIANT='Cloud Edition'
00:01:16.029 ++ VARIANT_ID=cloud
00:01:16.029 + uname -a
00:01:16.029 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:16.029 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:16.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:16.595 Hugepages
00:01:16.595 node hugesize free / total
00:01:16.595 node0 1048576kB 0 / 0
00:01:16.595 node0 2048kB 0 / 0
00:01:16.595
00:01:16.595 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:16.595 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:16.595 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:16.595 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:16.595 + rm -f /tmp/spdk-ld-path
00:01:16.595 + source autorun-spdk.conf
00:01:16.595 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.595 ++ SPDK_RUN_ASAN=1
00:01:16.595 ++ SPDK_RUN_UBSAN=1
00:01:16.595 ++ SPDK_TEST_RAID=1
00:01:16.595 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.595 ++ RUN_NIGHTLY=0
00:01:16.595 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:16.595 + [[ -n '' ]]
00:01:16.595 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:16.595 + for M in /var/spdk/build-*-manifest.txt
00:01:16.595 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:16.595 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.595 + for M in /var/spdk/build-*-manifest.txt
00:01:16.595 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:16.595 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.595 + for M in /var/spdk/build-*-manifest.txt
00:01:16.595 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:16.595 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.595 ++ uname
00:01:16.595 + [[ Linux == \L\i\n\u\x ]]
00:01:16.595 + sudo dmesg -T
00:01:16.595 + sudo dmesg --clear
00:01:16.595 + dmesg_pid=5213
00:01:16.595 + sudo dmesg -Tw
00:01:16.595 + [[ Fedora Linux == FreeBSD ]]
00:01:16.595 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.595 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.595 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:16.595 + [[ -x /usr/src/fio-static/fio ]]
00:01:16.595 + export FIO_BIN=/usr/src/fio-static/fio
00:01:16.595 + FIO_BIN=/usr/src/fio-static/fio
00:01:16.595 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:16.595 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:16.595 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:16.595 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.595 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.595 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:16.595 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.595 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.595 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:16.595 18:48:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:16.595 18:48:07 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:16.595 18:48:07 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.595 18:48:07 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:16.595 18:48:07 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:16.595 18:48:07 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:16.595 18:48:07 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.595 18:48:07 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:16.595 18:48:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:16.595 18:48:07 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:16.855 18:48:08 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:16.855 18:48:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:16.855 18:48:08 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:16.855 18:48:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:16.855 18:48:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:16.855 18:48:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:16.855 18:48:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.855 18:48:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.855 18:48:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.855 18:48:08 -- paths/export.sh@5 -- $ export PATH
00:01:16.855 18:48:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.855 18:48:08 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:16.855 18:48:08 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:16.855 18:48:08 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732646888.XXXXXX
00:01:16.855 18:48:08 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732646888.lGeDFA
00:01:16.855 18:48:08 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:16.855 18:48:08 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:16.855 18:48:08 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:16.855 18:48:08 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:16.855 18:48:08 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:16.855 18:48:08 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:16.855 18:48:08 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:16.855 18:48:08 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.855 18:48:08 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:16.855 18:48:08 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:16.855 18:48:08 -- pm/common@17 -- $ local monitor
00:01:16.855 18:48:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:16.855 18:48:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:16.855 18:48:08 -- pm/common@21 -- $ date +%s
00:01:16.855 18:48:08 -- pm/common@25 -- $ sleep 1
00:01:16.855 18:48:08 -- pm/common@21 -- $ date +%s
00:01:16.855 18:48:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732646888
00:01:16.855 18:48:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732646888
00:01:16.855 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732646888_collect-vmstat.pm.log
00:01:16.855 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732646888_collect-cpu-load.pm.log
00:01:17.850 18:48:09 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:17.850 18:48:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:17.850 18:48:09 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:17.850 18:48:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:17.850 18:48:09 -- spdk/autobuild.sh@16 -- $ date -u
00:01:17.850 Tue Nov 26 06:48:09 PM UTC 2024
00:01:17.850 18:48:09 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:17.850 v25.01-pre-251-g658cb4c04
00:01:17.850 18:48:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:17.850 18:48:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:17.851 18:48:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:17.851 18:48:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:17.851 18:48:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.851 ************************************
00:01:17.851 START TEST asan
00:01:17.851 ************************************
00:01:17.851 using asan
00:01:17.851 18:48:09 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:17.851
00:01:17.851 real	0m0.000s
00:01:17.851 user	0m0.000s
00:01:17.851 sys	0m0.000s
00:01:17.851 18:48:09 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:17.851 ************************************
00:01:17.851 END TEST asan
00:01:17.851 18:48:09 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:17.851 ************************************
00:01:17.851 18:48:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:17.851 18:48:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:17.851 18:48:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:17.851 18:48:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:17.851 18:48:09 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.851 ************************************
00:01:17.851 START TEST ubsan
00:01:17.851 ************************************
00:01:17.851 using ubsan
00:01:17.851 18:48:09 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:17.851
00:01:17.851 real	0m0.000s
00:01:17.851 user	0m0.000s
00:01:17.851 sys	0m0.000s
00:01:17.851 18:48:09 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:17.851 18:48:09 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:17.851 ************************************
00:01:17.851 END TEST ubsan
00:01:17.851 ************************************
00:01:17.851 18:48:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:17.851 18:48:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:17.851 18:48:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:17.851 18:48:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:17.851 18:48:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:17.851 18:48:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:17.851 18:48:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:17.851 18:48:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:17.851 18:48:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:18.109 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:18.109 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:18.677 Using 'verbs' RDMA provider
00:01:34.573 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:46.776 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:46.776 Creating mk/config.mk...done.
00:01:46.776 Creating mk/cc.flags.mk...done.
00:01:46.776 Type 'make' to build.
00:01:46.776 18:48:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:46.776 18:48:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:46.776 18:48:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:46.776 18:48:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.776 ************************************
00:01:46.776 START TEST make
00:01:46.776 ************************************
00:01:46.776 18:48:37 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:46.776 make[1]: Nothing to be done for 'all'.
00:01:58.979 The Meson build system 00:01:58.979 Version: 1.5.0 00:01:58.979 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:58.979 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:58.979 Build type: native build 00:01:58.979 Program cat found: YES (/usr/bin/cat) 00:01:58.979 Project name: DPDK 00:01:58.979 Project version: 24.03.0 00:01:58.979 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:58.979 C linker for the host machine: cc ld.bfd 2.40-14 00:01:58.979 Host machine cpu family: x86_64 00:01:58.979 Host machine cpu: x86_64 00:01:58.979 Message: ## Building in Developer Mode ## 00:01:58.979 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.979 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.979 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.979 Program python3 found: YES (/usr/bin/python3) 00:01:58.979 Program cat found: YES (/usr/bin/cat) 00:01:58.979 Compiler for C supports arguments -march=native: YES 00:01:58.979 Checking for size of "void *" : 8 00:01:58.979 Checking for size of "void *" : 8 (cached) 00:01:58.979 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:58.979 Library m found: YES 00:01:58.979 Library numa found: YES 00:01:58.979 Has header "numaif.h" : YES 00:01:58.979 Library fdt found: NO 00:01:58.979 Library execinfo found: NO 00:01:58.979 Has header "execinfo.h" : YES 00:01:58.979 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:58.979 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.979 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.979 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.979 Run-time dependency openssl found: YES 3.1.1 00:01:58.979 Run-time dependency libpcap found: YES 1.10.4 00:01:58.979 Has header "pcap.h" with dependency 
libpcap: YES 00:01:58.979 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.979 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.979 Compiler for C supports arguments -Wformat: YES 00:01:58.979 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.979 Compiler for C supports arguments -Wformat-security: NO 00:01:58.979 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.979 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.979 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.979 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.979 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.979 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.979 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.979 Compiler for C supports arguments -Wundef: YES 00:01:58.979 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.979 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.979 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.979 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.979 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.979 Program objdump found: YES (/usr/bin/objdump) 00:01:58.979 Compiler for C supports arguments -mavx512f: YES 00:01:58.979 Checking if "AVX512 checking" compiles: YES 00:01:58.979 Fetching value of define "__SSE4_2__" : 1 00:01:58.979 Fetching value of define "__AES__" : 1 00:01:58.979 Fetching value of define "__AVX__" : 1 00:01:58.979 Fetching value of define "__AVX2__" : 1 00:01:58.979 Fetching value of define "__AVX512BW__" : (undefined) 00:01:58.979 Fetching value of define "__AVX512CD__" : (undefined) 00:01:58.979 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:58.979 Fetching value of define "__AVX512F__" : (undefined) 00:01:58.979 Fetching value of define "__AVX512VL__" : 
(undefined) 00:01:58.979 Fetching value of define "__PCLMUL__" : 1 00:01:58.979 Fetching value of define "__RDRND__" : 1 00:01:58.979 Fetching value of define "__RDSEED__" : 1 00:01:58.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.979 Fetching value of define "__znver1__" : (undefined) 00:01:58.979 Fetching value of define "__znver2__" : (undefined) 00:01:58.979 Fetching value of define "__znver3__" : (undefined) 00:01:58.979 Fetching value of define "__znver4__" : (undefined) 00:01:58.979 Library asan found: YES 00:01:58.979 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.979 Message: lib/log: Defining dependency "log" 00:01:58.979 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.979 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.979 Library rt found: YES 00:01:58.979 Checking for function "getentropy" : NO 00:01:58.979 Message: lib/eal: Defining dependency "eal" 00:01:58.979 Message: lib/ring: Defining dependency "ring" 00:01:58.979 Message: lib/rcu: Defining dependency "rcu" 00:01:58.979 Message: lib/mempool: Defining dependency "mempool" 00:01:58.979 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.979 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.979 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.979 Compiler for C supports arguments -mpclmul: YES 00:01:58.979 Compiler for C supports arguments -maes: YES 00:01:58.979 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.979 Compiler for C supports arguments -mavx512bw: YES 00:01:58.979 Compiler for C supports arguments -mavx512dq: YES 00:01:58.979 Compiler for C supports arguments -mavx512vl: YES 00:01:58.979 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.979 Compiler for C supports arguments -mavx2: YES 00:01:58.979 Compiler for C supports arguments -mavx: YES 00:01:58.979 Message: lib/net: Defining dependency "net" 00:01:58.979 Message: lib/meter: Defining 
dependency "meter" 00:01:58.979 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.979 Message: lib/pci: Defining dependency "pci" 00:01:58.979 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.979 Message: lib/hash: Defining dependency "hash" 00:01:58.979 Message: lib/timer: Defining dependency "timer" 00:01:58.979 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.979 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.979 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.979 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.979 Message: lib/power: Defining dependency "power" 00:01:58.979 Message: lib/reorder: Defining dependency "reorder" 00:01:58.979 Message: lib/security: Defining dependency "security" 00:01:58.979 Has header "linux/userfaultfd.h" : YES 00:01:58.979 Has header "linux/vduse.h" : YES 00:01:58.979 Message: lib/vhost: Defining dependency "vhost" 00:01:58.979 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.979 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.979 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.979 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.979 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.979 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.979 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.979 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.979 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.979 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.979 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:58.979 Configuring doxy-api-html.conf using configuration 00:01:58.979 Configuring doxy-api-man.conf using configuration 00:01:58.979 Program mandb found: YES 
(/usr/bin/mandb) 00:01:58.979 Program sphinx-build found: NO 00:01:58.979 Configuring rte_build_config.h using configuration 00:01:58.979 Message: 00:01:58.979 ================= 00:01:58.979 Applications Enabled 00:01:58.979 ================= 00:01:58.979 00:01:58.979 apps: 00:01:58.979 00:01:58.979 00:01:58.979 Message: 00:01:58.980 ================= 00:01:58.980 Libraries Enabled 00:01:58.980 ================= 00:01:58.980 00:01:58.980 libs: 00:01:58.980 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.980 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.980 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.980 00:01:58.980 Message: 00:01:58.980 =============== 00:01:58.980 Drivers Enabled 00:01:58.980 =============== 00:01:58.980 00:01:58.980 common: 00:01:58.980 00:01:58.980 bus: 00:01:58.980 pci, vdev, 00:01:58.980 mempool: 00:01:58.980 ring, 00:01:58.980 dma: 00:01:58.980 00:01:58.980 net: 00:01:58.980 00:01:58.980 crypto: 00:01:58.980 00:01:58.980 compress: 00:01:58.980 00:01:58.980 vdpa: 00:01:58.980 00:01:58.980 00:01:58.980 Message: 00:01:58.980 ================= 00:01:58.980 Content Skipped 00:01:58.980 ================= 00:01:58.980 00:01:58.980 apps: 00:01:58.980 dumpcap: explicitly disabled via build config 00:01:58.980 graph: explicitly disabled via build config 00:01:58.980 pdump: explicitly disabled via build config 00:01:58.980 proc-info: explicitly disabled via build config 00:01:58.980 test-acl: explicitly disabled via build config 00:01:58.980 test-bbdev: explicitly disabled via build config 00:01:58.980 test-cmdline: explicitly disabled via build config 00:01:58.980 test-compress-perf: explicitly disabled via build config 00:01:58.980 test-crypto-perf: explicitly disabled via build config 00:01:58.980 test-dma-perf: explicitly disabled via build config 00:01:58.980 test-eventdev: explicitly disabled via build config 00:01:58.980 test-fib: explicitly disabled via build config 00:01:58.980 
test-flow-perf: explicitly disabled via build config 00:01:58.980 test-gpudev: explicitly disabled via build config 00:01:58.980 test-mldev: explicitly disabled via build config 00:01:58.980 test-pipeline: explicitly disabled via build config 00:01:58.980 test-pmd: explicitly disabled via build config 00:01:58.980 test-regex: explicitly disabled via build config 00:01:58.980 test-sad: explicitly disabled via build config 00:01:58.980 test-security-perf: explicitly disabled via build config 00:01:58.980 00:01:58.980 libs: 00:01:58.980 argparse: explicitly disabled via build config 00:01:58.980 metrics: explicitly disabled via build config 00:01:58.980 acl: explicitly disabled via build config 00:01:58.980 bbdev: explicitly disabled via build config 00:01:58.980 bitratestats: explicitly disabled via build config 00:01:58.980 bpf: explicitly disabled via build config 00:01:58.980 cfgfile: explicitly disabled via build config 00:01:58.980 distributor: explicitly disabled via build config 00:01:58.980 efd: explicitly disabled via build config 00:01:58.980 eventdev: explicitly disabled via build config 00:01:58.980 dispatcher: explicitly disabled via build config 00:01:58.980 gpudev: explicitly disabled via build config 00:01:58.980 gro: explicitly disabled via build config 00:01:58.980 gso: explicitly disabled via build config 00:01:58.980 ip_frag: explicitly disabled via build config 00:01:58.980 jobstats: explicitly disabled via build config 00:01:58.980 latencystats: explicitly disabled via build config 00:01:58.980 lpm: explicitly disabled via build config 00:01:58.980 member: explicitly disabled via build config 00:01:58.980 pcapng: explicitly disabled via build config 00:01:58.980 rawdev: explicitly disabled via build config 00:01:58.980 regexdev: explicitly disabled via build config 00:01:58.980 mldev: explicitly disabled via build config 00:01:58.980 rib: explicitly disabled via build config 00:01:58.980 sched: explicitly disabled via build config 00:01:58.980 
stack: explicitly disabled via build config 00:01:58.980 ipsec: explicitly disabled via build config 00:01:58.980 pdcp: explicitly disabled via build config 00:01:58.980 fib: explicitly disabled via build config 00:01:58.980 port: explicitly disabled via build config 00:01:58.980 pdump: explicitly disabled via build config 00:01:58.980 table: explicitly disabled via build config 00:01:58.980 pipeline: explicitly disabled via build config 00:01:58.980 graph: explicitly disabled via build config 00:01:58.980 node: explicitly disabled via build config 00:01:58.980 00:01:58.980 drivers: 00:01:58.980 common/cpt: not in enabled drivers build config 00:01:58.980 common/dpaax: not in enabled drivers build config 00:01:58.980 common/iavf: not in enabled drivers build config 00:01:58.980 common/idpf: not in enabled drivers build config 00:01:58.980 common/ionic: not in enabled drivers build config 00:01:58.980 common/mvep: not in enabled drivers build config 00:01:58.980 common/octeontx: not in enabled drivers build config 00:01:58.980 bus/auxiliary: not in enabled drivers build config 00:01:58.980 bus/cdx: not in enabled drivers build config 00:01:58.980 bus/dpaa: not in enabled drivers build config 00:01:58.980 bus/fslmc: not in enabled drivers build config 00:01:58.980 bus/ifpga: not in enabled drivers build config 00:01:58.980 bus/platform: not in enabled drivers build config 00:01:58.980 bus/uacce: not in enabled drivers build config 00:01:58.980 bus/vmbus: not in enabled drivers build config 00:01:58.980 common/cnxk: not in enabled drivers build config 00:01:58.980 common/mlx5: not in enabled drivers build config 00:01:58.980 common/nfp: not in enabled drivers build config 00:01:58.980 common/nitrox: not in enabled drivers build config 00:01:58.980 common/qat: not in enabled drivers build config 00:01:58.980 common/sfc_efx: not in enabled drivers build config 00:01:58.980 mempool/bucket: not in enabled drivers build config 00:01:58.980 mempool/cnxk: not in enabled 
drivers build config 00:01:58.980 mempool/dpaa: not in enabled drivers build config 00:01:58.980 mempool/dpaa2: not in enabled drivers build config 00:01:58.980 mempool/octeontx: not in enabled drivers build config 00:01:58.980 mempool/stack: not in enabled drivers build config 00:01:58.980 dma/cnxk: not in enabled drivers build config 00:01:58.980 dma/dpaa: not in enabled drivers build config 00:01:58.980 dma/dpaa2: not in enabled drivers build config 00:01:58.980 dma/hisilicon: not in enabled drivers build config 00:01:58.980 dma/idxd: not in enabled drivers build config 00:01:58.980 dma/ioat: not in enabled drivers build config 00:01:58.980 dma/skeleton: not in enabled drivers build config 00:01:58.980 net/af_packet: not in enabled drivers build config 00:01:58.980 net/af_xdp: not in enabled drivers build config 00:01:58.980 net/ark: not in enabled drivers build config 00:01:58.980 net/atlantic: not in enabled drivers build config 00:01:58.980 net/avp: not in enabled drivers build config 00:01:58.980 net/axgbe: not in enabled drivers build config 00:01:58.980 net/bnx2x: not in enabled drivers build config 00:01:58.980 net/bnxt: not in enabled drivers build config 00:01:58.980 net/bonding: not in enabled drivers build config 00:01:58.980 net/cnxk: not in enabled drivers build config 00:01:58.980 net/cpfl: not in enabled drivers build config 00:01:58.980 net/cxgbe: not in enabled drivers build config 00:01:58.980 net/dpaa: not in enabled drivers build config 00:01:58.980 net/dpaa2: not in enabled drivers build config 00:01:58.980 net/e1000: not in enabled drivers build config 00:01:58.980 net/ena: not in enabled drivers build config 00:01:58.980 net/enetc: not in enabled drivers build config 00:01:58.980 net/enetfec: not in enabled drivers build config 00:01:58.980 net/enic: not in enabled drivers build config 00:01:58.980 net/failsafe: not in enabled drivers build config 00:01:58.980 net/fm10k: not in enabled drivers build config 00:01:58.980 net/gve: not in 
enabled drivers build config 00:01:58.980 net/hinic: not in enabled drivers build config 00:01:58.980 net/hns3: not in enabled drivers build config 00:01:58.980 net/i40e: not in enabled drivers build config 00:01:58.980 net/iavf: not in enabled drivers build config 00:01:58.980 net/ice: not in enabled drivers build config 00:01:58.980 net/idpf: not in enabled drivers build config 00:01:58.980 net/igc: not in enabled drivers build config 00:01:58.980 net/ionic: not in enabled drivers build config 00:01:58.980 net/ipn3ke: not in enabled drivers build config 00:01:58.980 net/ixgbe: not in enabled drivers build config 00:01:58.980 net/mana: not in enabled drivers build config 00:01:58.980 net/memif: not in enabled drivers build config 00:01:58.980 net/mlx4: not in enabled drivers build config 00:01:58.980 net/mlx5: not in enabled drivers build config 00:01:58.980 net/mvneta: not in enabled drivers build config 00:01:58.980 net/mvpp2: not in enabled drivers build config 00:01:58.980 net/netvsc: not in enabled drivers build config 00:01:58.980 net/nfb: not in enabled drivers build config 00:01:58.980 net/nfp: not in enabled drivers build config 00:01:58.980 net/ngbe: not in enabled drivers build config 00:01:58.980 net/null: not in enabled drivers build config 00:01:58.980 net/octeontx: not in enabled drivers build config 00:01:58.980 net/octeon_ep: not in enabled drivers build config 00:01:58.980 net/pcap: not in enabled drivers build config 00:01:58.980 net/pfe: not in enabled drivers build config 00:01:58.980 net/qede: not in enabled drivers build config 00:01:58.980 net/ring: not in enabled drivers build config 00:01:58.980 net/sfc: not in enabled drivers build config 00:01:58.980 net/softnic: not in enabled drivers build config 00:01:58.980 net/tap: not in enabled drivers build config 00:01:58.980 net/thunderx: not in enabled drivers build config 00:01:58.980 net/txgbe: not in enabled drivers build config 00:01:58.980 net/vdev_netvsc: not in enabled drivers build 
config 00:01:58.980 net/vhost: not in enabled drivers build config 00:01:58.980 net/virtio: not in enabled drivers build config 00:01:58.980 net/vmxnet3: not in enabled drivers build config 00:01:58.980 raw/*: missing internal dependency, "rawdev" 00:01:58.980 crypto/armv8: not in enabled drivers build config 00:01:58.980 crypto/bcmfs: not in enabled drivers build config 00:01:58.980 crypto/caam_jr: not in enabled drivers build config 00:01:58.980 crypto/ccp: not in enabled drivers build config 00:01:58.980 crypto/cnxk: not in enabled drivers build config 00:01:58.980 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.981 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.981 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.981 crypto/mlx5: not in enabled drivers build config 00:01:58.981 crypto/mvsam: not in enabled drivers build config 00:01:58.981 crypto/nitrox: not in enabled drivers build config 00:01:58.981 crypto/null: not in enabled drivers build config 00:01:58.981 crypto/octeontx: not in enabled drivers build config 00:01:58.981 crypto/openssl: not in enabled drivers build config 00:01:58.981 crypto/scheduler: not in enabled drivers build config 00:01:58.981 crypto/uadk: not in enabled drivers build config 00:01:58.981 crypto/virtio: not in enabled drivers build config 00:01:58.981 compress/isal: not in enabled drivers build config 00:01:58.981 compress/mlx5: not in enabled drivers build config 00:01:58.981 compress/nitrox: not in enabled drivers build config 00:01:58.981 compress/octeontx: not in enabled drivers build config 00:01:58.981 compress/zlib: not in enabled drivers build config 00:01:58.981 regex/*: missing internal dependency, "regexdev" 00:01:58.981 ml/*: missing internal dependency, "mldev" 00:01:58.981 vdpa/ifc: not in enabled drivers build config 00:01:58.981 vdpa/mlx5: not in enabled drivers build config 00:01:58.981 vdpa/nfp: not in enabled drivers build config 00:01:58.981 vdpa/sfc: not in enabled 
drivers build config 00:01:58.981 event/*: missing internal dependency, "eventdev" 00:01:58.981 baseband/*: missing internal dependency, "bbdev" 00:01:58.981 gpu/*: missing internal dependency, "gpudev" 00:01:58.981 00:01:58.981 00:01:58.981 Build targets in project: 85 00:01:58.981 00:01:58.981 DPDK 24.03.0 00:01:58.981 00:01:58.981 User defined options 00:01:58.981 buildtype : debug 00:01:58.981 default_library : shared 00:01:58.981 libdir : lib 00:01:58.981 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:58.981 b_sanitize : address 00:01:58.981 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:58.981 c_link_args : 00:01:58.981 cpu_instruction_set: native 00:01:58.981 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:58.981 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:58.981 enable_docs : false 00:01:58.981 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:58.981 enable_kmods : false 00:01:58.981 max_lcores : 128 00:01:58.981 tests : false 00:01:58.981 00:01:58.981 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.239 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:59.498 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.498 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.498 [3/268] Linking static target lib/librte_kvargs.a 00:01:59.498 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:01:59.498 [5/268] Linking static target lib/librte_log.a 00:01:59.498 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.065 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.065 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.065 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.065 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.324 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.324 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.324 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.324 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.583 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.583 [16/268] Linking static target lib/librte_telemetry.a 00:02:00.583 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.583 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.583 [19/268] Linking target lib/librte_log.so.24.1 00:02:00.896 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.896 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:00.896 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.154 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.154 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.154 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.413 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.413 
[27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.413 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.413 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.413 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.413 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.413 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.413 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.672 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:01.931 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.931 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.189 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.189 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.189 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.189 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.189 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.448 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.448 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.448 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.707 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.707 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.707 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.707 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.965 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.223 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.223 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.223 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.482 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.482 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.482 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.739 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.739 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.739 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.997 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.997 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.997 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.255 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.255 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.255 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.513 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.513 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.513 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.513 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.771 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.771 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.772 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.030 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.030 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.030 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.030 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.318 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.318 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.318 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.318 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.318 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.577 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.577 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.577 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.577 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.837 [85/268] Linking static target lib/librte_eal.a 00:02:05.837 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.837 [87/268] Linking static target lib/librte_ring.a 00:02:05.837 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.096 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.096 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.355 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.355 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.355 [93/268] Linking static target lib/librte_mempool.a 00:02:06.355 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:06.355 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.355 [96/268] Linking static target lib/librte_rcu.a 00:02:06.355 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.615 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.873 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.132 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.132 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.132 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.132 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.132 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.132 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.132 [106/268] Linking static target lib/librte_mbuf.a 00:02:07.390 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.390 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.390 [109/268] Linking static target lib/librte_net.a 00:02:07.649 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.649 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.649 [112/268] Linking static target lib/librte_meter.a 00:02:07.649 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.908 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.908 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.909 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.167 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.167 [118/268] Generating lib/meter.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:08.428 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.692 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.692 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.692 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.950 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.210 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.210 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.469 [126/268] Linking static target lib/librte_pci.a 00:02:09.469 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.469 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.469 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.469 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.469 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.728 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.728 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.729 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.729 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.729 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.729 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.729 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.729 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.729 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.729 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.988 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.988 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.988 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.988 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.248 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.248 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.248 [148/268] Linking static target lib/librte_cmdline.a 00:02:10.506 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.506 [150/268] Linking static target lib/librte_timer.a 00:02:10.765 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.765 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.765 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.765 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.023 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:11.023 [156/268] Linking static target lib/librte_ethdev.a 00:02:11.023 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.282 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.282 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.541 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.541 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.541 [162/268] Linking static target lib/librte_hash.a 00:02:11.541 [163/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.799 [164/268] Linking static target lib/librte_compressdev.a 00:02:11.799 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.799 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.799 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.799 [168/268] Linking static target lib/librte_dmadev.a 00:02:11.799 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.058 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.058 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.058 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.626 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.626 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.626 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.626 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.626 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.884 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.884 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.142 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:13.142 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.142 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.142 [183/268] Linking static target lib/librte_cryptodev.a 00:02:13.399 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.658 
[185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.658 [186/268] Linking static target lib/librte_reorder.a 00:02:13.658 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.915 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.915 [189/268] Linking static target lib/librte_power.a 00:02:13.915 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.173 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.173 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.173 [193/268] Linking static target lib/librte_security.a 00:02:14.173 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.107 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.107 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.107 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.107 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.107 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.107 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.674 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.674 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.932 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.932 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.932 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.932 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.191 [207/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.450 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.450 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.450 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.450 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.708 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:16.708 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.708 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:16.708 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:16.708 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.708 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.708 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.708 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:16.708 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.708 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.967 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.967 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.967 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.967 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.967 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:17.225 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:17.794 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.057 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.057 [230/268] Linking target lib/librte_eal.so.24.1 00:02:18.315 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:18.315 [232/268] Linking target lib/librte_ring.so.24.1 00:02:18.315 [233/268] Linking target lib/librte_pci.so.24.1 00:02:18.315 [234/268] Linking target lib/librte_meter.so.24.1 00:02:18.315 [235/268] Linking target lib/librte_timer.so.24.1 00:02:18.315 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.315 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:18.315 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:18.315 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:18.315 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.575 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.575 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:18.575 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.575 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.575 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:18.575 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.575 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.837 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.837 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.837 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.837 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:18.837 [252/268] Linking target 
lib/librte_cryptodev.so.24.1 00:02:18.837 [253/268] Linking target lib/librte_net.so.24.1 00:02:18.837 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:19.095 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:19.095 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:19.095 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:19.095 [258/268] Linking target lib/librte_security.so.24.1 00:02:19.095 [259/268] Linking target lib/librte_hash.so.24.1 00:02:19.352 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:19.610 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.610 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:19.869 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:19.869 [264/268] Linking target lib/librte_power.so.24.1 00:02:23.150 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.150 [266/268] Linking static target lib/librte_vhost.a 00:02:24.526 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.526 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:24.526 INFO: autodetecting backend as ninja 00:02:24.526 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:46.454 CC lib/ut/ut.o 00:02:46.454 CC lib/ut_mock/mock.o 00:02:46.454 CC lib/log/log.o 00:02:46.454 CC lib/log/log_flags.o 00:02:46.454 CC lib/log/log_deprecated.o 00:02:46.454 LIB libspdk_ut.a 00:02:46.454 SO libspdk_ut.so.2.0 00:02:46.454 LIB libspdk_ut_mock.a 00:02:46.454 LIB libspdk_log.a 00:02:46.454 SO libspdk_ut_mock.so.6.0 00:02:46.454 SO libspdk_log.so.7.1 00:02:46.454 SYMLINK libspdk_ut.so 00:02:46.454 SYMLINK libspdk_ut_mock.so 00:02:46.454 SYMLINK libspdk_log.so 
00:02:46.454 CC lib/dma/dma.o 00:02:46.454 CC lib/ioat/ioat.o 00:02:46.454 CC lib/util/base64.o 00:02:46.454 CC lib/util/cpuset.o 00:02:46.454 CC lib/util/bit_array.o 00:02:46.454 CC lib/util/crc32.o 00:02:46.454 CC lib/util/crc16.o 00:02:46.454 CC lib/util/crc32c.o 00:02:46.454 CXX lib/trace_parser/trace.o 00:02:46.454 CC lib/vfio_user/host/vfio_user_pci.o 00:02:46.454 CC lib/util/crc32_ieee.o 00:02:46.454 CC lib/vfio_user/host/vfio_user.o 00:02:46.454 CC lib/util/crc64.o 00:02:46.454 LIB libspdk_dma.a 00:02:46.454 CC lib/util/dif.o 00:02:46.454 CC lib/util/fd.o 00:02:46.454 CC lib/util/fd_group.o 00:02:46.454 SO libspdk_dma.so.5.0 00:02:46.454 SYMLINK libspdk_dma.so 00:02:46.454 CC lib/util/file.o 00:02:46.454 CC lib/util/hexlify.o 00:02:46.454 CC lib/util/iov.o 00:02:46.454 LIB libspdk_ioat.a 00:02:46.454 SO libspdk_ioat.so.7.0 00:02:46.454 LIB libspdk_vfio_user.a 00:02:46.454 CC lib/util/math.o 00:02:46.454 CC lib/util/net.o 00:02:46.454 SO libspdk_vfio_user.so.5.0 00:02:46.454 SYMLINK libspdk_ioat.so 00:02:46.454 CC lib/util/pipe.o 00:02:46.454 CC lib/util/strerror_tls.o 00:02:46.454 SYMLINK libspdk_vfio_user.so 00:02:46.454 CC lib/util/string.o 00:02:46.454 CC lib/util/uuid.o 00:02:46.454 CC lib/util/xor.o 00:02:46.454 CC lib/util/zipf.o 00:02:46.454 CC lib/util/md5.o 00:02:47.022 LIB libspdk_util.a 00:02:47.022 SO libspdk_util.so.10.1 00:02:47.022 LIB libspdk_trace_parser.a 00:02:47.022 SO libspdk_trace_parser.so.6.0 00:02:47.281 SYMLINK libspdk_util.so 00:02:47.281 SYMLINK libspdk_trace_parser.so 00:02:47.281 CC lib/idxd/idxd.o 00:02:47.281 CC lib/conf/conf.o 00:02:47.281 CC lib/idxd/idxd_user.o 00:02:47.281 CC lib/idxd/idxd_kernel.o 00:02:47.281 CC lib/vmd/vmd.o 00:02:47.281 CC lib/vmd/led.o 00:02:47.281 CC lib/env_dpdk/env.o 00:02:47.281 CC lib/env_dpdk/memory.o 00:02:47.281 CC lib/rdma_utils/rdma_utils.o 00:02:47.281 CC lib/json/json_parse.o 00:02:47.539 CC lib/json/json_util.o 00:02:47.539 CC lib/json/json_write.o 00:02:47.539 LIB libspdk_conf.a 
00:02:47.797 SO libspdk_conf.so.6.0 00:02:47.797 CC lib/env_dpdk/pci.o 00:02:47.797 SYMLINK libspdk_conf.so 00:02:47.797 CC lib/env_dpdk/init.o 00:02:47.797 CC lib/env_dpdk/threads.o 00:02:47.797 LIB libspdk_rdma_utils.a 00:02:47.797 CC lib/env_dpdk/pci_ioat.o 00:02:47.797 SO libspdk_rdma_utils.so.1.0 00:02:47.797 LIB libspdk_json.a 00:02:48.054 SO libspdk_json.so.6.0 00:02:48.054 SYMLINK libspdk_rdma_utils.so 00:02:48.054 CC lib/env_dpdk/pci_virtio.o 00:02:48.054 CC lib/env_dpdk/pci_vmd.o 00:02:48.055 SYMLINK libspdk_json.so 00:02:48.055 CC lib/rdma_provider/common.o 00:02:48.055 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.055 CC lib/env_dpdk/pci_idxd.o 00:02:48.313 CC lib/env_dpdk/pci_event.o 00:02:48.313 CC lib/jsonrpc/jsonrpc_server.o 00:02:48.313 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:48.313 LIB libspdk_idxd.a 00:02:48.313 SO libspdk_idxd.so.12.1 00:02:48.313 LIB libspdk_vmd.a 00:02:48.313 CC lib/env_dpdk/sigbus_handler.o 00:02:48.313 SO libspdk_vmd.so.6.0 00:02:48.313 SYMLINK libspdk_idxd.so 00:02:48.313 CC lib/jsonrpc/jsonrpc_client.o 00:02:48.313 CC lib/env_dpdk/pci_dpdk.o 00:02:48.313 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:48.313 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:48.313 SYMLINK libspdk_vmd.so 00:02:48.313 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:48.313 LIB libspdk_rdma_provider.a 00:02:48.571 SO libspdk_rdma_provider.so.7.0 00:02:48.571 SYMLINK libspdk_rdma_provider.so 00:02:48.571 LIB libspdk_jsonrpc.a 00:02:48.829 SO libspdk_jsonrpc.so.6.0 00:02:48.829 SYMLINK libspdk_jsonrpc.so 00:02:49.087 CC lib/rpc/rpc.o 00:02:49.344 LIB libspdk_env_dpdk.a 00:02:49.344 LIB libspdk_rpc.a 00:02:49.344 SO libspdk_rpc.so.6.0 00:02:49.344 SYMLINK libspdk_rpc.so 00:02:49.344 SO libspdk_env_dpdk.so.15.1 00:02:49.602 SYMLINK libspdk_env_dpdk.so 00:02:49.602 CC lib/keyring/keyring.o 00:02:49.602 CC lib/keyring/keyring_rpc.o 00:02:49.602 CC lib/trace/trace.o 00:02:49.602 CC lib/trace/trace_rpc.o 00:02:49.602 CC lib/trace/trace_flags.o 00:02:49.602 CC 
lib/notify/notify.o 00:02:49.602 CC lib/notify/notify_rpc.o 00:02:49.860 LIB libspdk_notify.a 00:02:49.860 SO libspdk_notify.so.6.0 00:02:50.119 LIB libspdk_trace.a 00:02:50.119 SYMLINK libspdk_notify.so 00:02:50.119 LIB libspdk_keyring.a 00:02:50.119 SO libspdk_trace.so.11.0 00:02:50.119 SO libspdk_keyring.so.2.0 00:02:50.119 SYMLINK libspdk_trace.so 00:02:50.119 SYMLINK libspdk_keyring.so 00:02:50.377 CC lib/sock/sock.o 00:02:50.377 CC lib/thread/thread.o 00:02:50.377 CC lib/sock/sock_rpc.o 00:02:50.377 CC lib/thread/iobuf.o 00:02:50.945 LIB libspdk_sock.a 00:02:50.945 SO libspdk_sock.so.10.0 00:02:51.203 SYMLINK libspdk_sock.so 00:02:51.470 CC lib/nvme/nvme_ctrlr.o 00:02:51.470 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:51.470 CC lib/nvme/nvme_fabric.o 00:02:51.470 CC lib/nvme/nvme_ns_cmd.o 00:02:51.470 CC lib/nvme/nvme_ns.o 00:02:51.470 CC lib/nvme/nvme_pcie_common.o 00:02:51.470 CC lib/nvme/nvme_pcie.o 00:02:51.470 CC lib/nvme/nvme_qpair.o 00:02:51.470 CC lib/nvme/nvme.o 00:02:52.403 CC lib/nvme/nvme_quirks.o 00:02:52.403 CC lib/nvme/nvme_transport.o 00:02:52.403 LIB libspdk_thread.a 00:02:52.403 CC lib/nvme/nvme_discovery.o 00:02:52.403 SO libspdk_thread.so.11.0 00:02:52.685 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.685 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.685 SYMLINK libspdk_thread.so 00:02:52.685 CC lib/nvme/nvme_tcp.o 00:02:52.685 CC lib/nvme/nvme_opal.o 00:02:52.685 CC lib/nvme/nvme_io_msg.o 00:02:52.685 CC lib/nvme/nvme_poll_group.o 00:02:53.251 CC lib/accel/accel.o 00:02:53.251 CC lib/accel/accel_rpc.o 00:02:53.251 CC lib/accel/accel_sw.o 00:02:53.251 CC lib/nvme/nvme_zns.o 00:02:53.251 CC lib/nvme/nvme_stubs.o 00:02:53.508 CC lib/nvme/nvme_auth.o 00:02:53.508 CC lib/nvme/nvme_cuse.o 00:02:53.508 CC lib/blob/blobstore.o 00:02:53.508 CC lib/blob/request.o 00:02:53.765 CC lib/blob/zeroes.o 00:02:53.765 CC lib/blob/blob_bs_dev.o 00:02:54.023 CC lib/nvme/nvme_rdma.o 00:02:54.281 CC lib/init/json_config.o 00:02:54.281 CC lib/virtio/virtio.o 00:02:54.281 CC 
lib/fsdev/fsdev.o 00:02:54.281 CC lib/init/subsystem.o 00:02:54.538 CC lib/init/subsystem_rpc.o 00:02:54.538 CC lib/init/rpc.o 00:02:54.538 CC lib/fsdev/fsdev_io.o 00:02:54.538 CC lib/virtio/virtio_vhost_user.o 00:02:54.538 CC lib/virtio/virtio_vfio_user.o 00:02:54.538 CC lib/virtio/virtio_pci.o 00:02:54.797 LIB libspdk_init.a 00:02:54.797 SO libspdk_init.so.6.0 00:02:54.797 CC lib/fsdev/fsdev_rpc.o 00:02:54.797 SYMLINK libspdk_init.so 00:02:54.797 LIB libspdk_accel.a 00:02:54.797 SO libspdk_accel.so.16.0 00:02:55.055 SYMLINK libspdk_accel.so 00:02:55.055 CC lib/event/app.o 00:02:55.055 CC lib/event/reactor.o 00:02:55.055 CC lib/event/log_rpc.o 00:02:55.055 CC lib/event/app_rpc.o 00:02:55.055 CC lib/event/scheduler_static.o 00:02:55.055 LIB libspdk_virtio.a 00:02:55.055 SO libspdk_virtio.so.7.0 00:02:55.055 LIB libspdk_fsdev.a 00:02:55.055 CC lib/bdev/bdev.o 00:02:55.055 SO libspdk_fsdev.so.2.0 00:02:55.314 SYMLINK libspdk_virtio.so 00:02:55.314 CC lib/bdev/bdev_rpc.o 00:02:55.314 CC lib/bdev/bdev_zone.o 00:02:55.314 CC lib/bdev/part.o 00:02:55.314 SYMLINK libspdk_fsdev.so 00:02:55.314 CC lib/bdev/scsi_nvme.o 00:02:55.573 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:55.573 LIB libspdk_event.a 00:02:55.573 SO libspdk_event.so.14.0 00:02:55.832 SYMLINK libspdk_event.so 00:02:55.832 LIB libspdk_nvme.a 00:02:56.090 SO libspdk_nvme.so.15.0 00:02:56.348 LIB libspdk_fuse_dispatcher.a 00:02:56.348 SO libspdk_fuse_dispatcher.so.1.0 00:02:56.348 SYMLINK libspdk_fuse_dispatcher.so 00:02:56.607 SYMLINK libspdk_nvme.so 00:02:58.005 LIB libspdk_blob.a 00:02:58.005 SO libspdk_blob.so.12.0 00:02:58.263 SYMLINK libspdk_blob.so 00:02:58.522 CC lib/lvol/lvol.o 00:02:58.522 CC lib/blobfs/blobfs.o 00:02:58.522 CC lib/blobfs/tree.o 00:02:59.090 LIB libspdk_bdev.a 00:02:59.090 SO libspdk_bdev.so.17.0 00:02:59.090 SYMLINK libspdk_bdev.so 00:02:59.349 CC lib/nvmf/ctrlr.o 00:02:59.349 CC lib/nvmf/ctrlr_discovery.o 00:02:59.349 CC lib/nvmf/subsystem.o 00:02:59.349 CC lib/nvmf/ctrlr_bdev.o 
00:02:59.349 CC lib/ublk/ublk.o 00:02:59.349 CC lib/scsi/dev.o 00:02:59.349 CC lib/nbd/nbd.o 00:02:59.349 CC lib/ftl/ftl_core.o 00:02:59.608 LIB libspdk_blobfs.a 00:02:59.608 SO libspdk_blobfs.so.11.0 00:02:59.608 CC lib/scsi/lun.o 00:02:59.608 SYMLINK libspdk_blobfs.so 00:02:59.608 CC lib/scsi/port.o 00:02:59.608 LIB libspdk_lvol.a 00:02:59.866 SO libspdk_lvol.so.11.0 00:02:59.866 SYMLINK libspdk_lvol.so 00:02:59.866 CC lib/scsi/scsi.o 00:02:59.866 CC lib/ublk/ublk_rpc.o 00:02:59.866 CC lib/ftl/ftl_init.o 00:02:59.866 CC lib/nbd/nbd_rpc.o 00:03:00.124 CC lib/ftl/ftl_layout.o 00:03:00.124 CC lib/ftl/ftl_debug.o 00:03:00.124 CC lib/ftl/ftl_io.o 00:03:00.124 LIB libspdk_nbd.a 00:03:00.124 CC lib/scsi/scsi_bdev.o 00:03:00.124 CC lib/scsi/scsi_pr.o 00:03:00.124 SO libspdk_nbd.so.7.0 00:03:00.124 LIB libspdk_ublk.a 00:03:00.124 SYMLINK libspdk_nbd.so 00:03:00.124 CC lib/scsi/scsi_rpc.o 00:03:00.382 SO libspdk_ublk.so.3.0 00:03:00.382 CC lib/nvmf/nvmf.o 00:03:00.382 SYMLINK libspdk_ublk.so 00:03:00.382 CC lib/ftl/ftl_sb.o 00:03:00.382 CC lib/ftl/ftl_l2p.o 00:03:00.382 CC lib/ftl/ftl_l2p_flat.o 00:03:00.382 CC lib/ftl/ftl_nv_cache.o 00:03:00.640 CC lib/nvmf/nvmf_rpc.o 00:03:00.640 CC lib/nvmf/transport.o 00:03:00.640 CC lib/ftl/ftl_band.o 00:03:00.640 CC lib/scsi/task.o 00:03:00.640 CC lib/nvmf/tcp.o 00:03:00.899 CC lib/nvmf/stubs.o 00:03:00.899 LIB libspdk_scsi.a 00:03:00.899 SO libspdk_scsi.so.9.0 00:03:01.157 CC lib/nvmf/mdns_server.o 00:03:01.157 CC lib/nvmf/rdma.o 00:03:01.157 SYMLINK libspdk_scsi.so 00:03:01.157 CC lib/nvmf/auth.o 00:03:01.416 CC lib/ftl/ftl_band_ops.o 00:03:01.675 CC lib/ftl/ftl_rq.o 00:03:01.675 CC lib/ftl/ftl_writer.o 00:03:01.675 CC lib/iscsi/conn.o 00:03:01.675 CC lib/vhost/vhost.o 00:03:01.675 CC lib/ftl/ftl_reloc.o 00:03:01.675 CC lib/iscsi/init_grp.o 00:03:01.935 CC lib/ftl/ftl_l2p_cache.o 00:03:01.935 CC lib/ftl/ftl_p2l.o 00:03:01.935 CC lib/ftl/ftl_p2l_log.o 00:03:02.209 CC lib/iscsi/iscsi.o 00:03:02.210 CC lib/iscsi/param.o 00:03:02.210 
CC lib/ftl/mngt/ftl_mngt.o 00:03:02.481 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.481 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.481 CC lib/iscsi/portal_grp.o 00:03:02.481 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.481 CC lib/vhost/vhost_rpc.o 00:03:02.739 CC lib/vhost/vhost_scsi.o 00:03:02.740 CC lib/vhost/vhost_blk.o 00:03:02.740 CC lib/iscsi/tgt_node.o 00:03:02.740 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.740 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.998 CC lib/vhost/rte_vhost_user.o 00:03:02.998 CC lib/iscsi/iscsi_subsystem.o 00:03:03.257 CC lib/iscsi/iscsi_rpc.o 00:03:03.257 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.257 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.515 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.515 CC lib/iscsi/task.o 00:03:03.515 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.515 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.772 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.772 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.772 CC lib/ftl/utils/ftl_conf.o 00:03:03.772 CC lib/ftl/utils/ftl_md.o 00:03:03.772 CC lib/ftl/utils/ftl_mempool.o 00:03:03.772 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.772 CC lib/ftl/utils/ftl_property.o 00:03:04.030 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.030 LIB libspdk_iscsi.a 00:03:04.030 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.030 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.030 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.030 LIB libspdk_nvmf.a 00:03:04.030 SO libspdk_iscsi.so.8.0 00:03:04.289 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.289 LIB libspdk_vhost.a 00:03:04.289 SO libspdk_nvmf.so.20.0 00:03:04.289 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.289 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.289 SO libspdk_vhost.so.8.0 00:03:04.289 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.289 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.289 SYMLINK libspdk_iscsi.so 00:03:04.289 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.289 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.289 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.547 
SYMLINK libspdk_vhost.so 00:03:04.547 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.547 CC lib/ftl/base/ftl_base_dev.o 00:03:04.547 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.547 CC lib/ftl/ftl_trace.o 00:03:04.547 SYMLINK libspdk_nvmf.so 00:03:04.805 LIB libspdk_ftl.a 00:03:05.064 SO libspdk_ftl.so.9.0 00:03:05.631 SYMLINK libspdk_ftl.so 00:03:05.889 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.889 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.889 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.889 CC module/accel/error/accel_error.o 00:03:05.890 CC module/fsdev/aio/fsdev_aio.o 00:03:05.890 CC module/blob/bdev/blob_bdev.o 00:03:05.890 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.890 CC module/sock/posix/posix.o 00:03:05.890 CC module/keyring/file/keyring.o 00:03:05.890 CC module/accel/ioat/accel_ioat.o 00:03:05.890 LIB libspdk_env_dpdk_rpc.a 00:03:06.147 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.147 CC module/keyring/file/keyring_rpc.o 00:03:06.147 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.147 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.147 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:06.147 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.147 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.147 LIB libspdk_scheduler_dynamic.a 00:03:06.147 LIB libspdk_scheduler_gscheduler.a 00:03:06.147 CC module/accel/error/accel_error_rpc.o 00:03:06.147 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.147 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.147 LIB libspdk_blob_bdev.a 00:03:06.147 LIB libspdk_keyring_file.a 00:03:06.420 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.420 SO libspdk_blob_bdev.so.12.0 00:03:06.420 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.420 SO libspdk_keyring_file.so.2.0 00:03:06.420 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.420 CC module/fsdev/aio/linux_aio_mgr.o 00:03:06.420 SYMLINK libspdk_blob_bdev.so 00:03:06.420 LIB libspdk_accel_error.a 00:03:06.420 SYMLINK libspdk_keyring_file.so 00:03:06.420 SO 
libspdk_accel_error.so.2.0 00:03:06.420 LIB libspdk_accel_ioat.a 00:03:06.420 SO libspdk_accel_ioat.so.6.0 00:03:06.420 SYMLINK libspdk_accel_error.so 00:03:06.420 CC module/accel/dsa/accel_dsa.o 00:03:06.420 CC module/keyring/linux/keyring.o 00:03:06.678 CC module/accel/iaa/accel_iaa.o 00:03:06.678 SYMLINK libspdk_accel_ioat.so 00:03:06.678 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.678 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.678 CC module/bdev/delay/vbdev_delay.o 00:03:06.678 CC module/keyring/linux/keyring_rpc.o 00:03:06.678 CC module/blobfs/bdev/blobfs_bdev.o 00:03:06.678 CC module/bdev/error/vbdev_error.o 00:03:06.678 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.936 LIB libspdk_accel_iaa.a 00:03:06.936 LIB libspdk_fsdev_aio.a 00:03:06.936 SO libspdk_accel_iaa.so.3.0 00:03:06.936 LIB libspdk_keyring_linux.a 00:03:06.936 SO libspdk_fsdev_aio.so.1.0 00:03:06.936 LIB libspdk_accel_dsa.a 00:03:06.936 SO libspdk_keyring_linux.so.1.0 00:03:06.936 LIB libspdk_sock_posix.a 00:03:06.936 SO libspdk_accel_dsa.so.5.0 00:03:06.936 SYMLINK libspdk_accel_iaa.so 00:03:06.936 SO libspdk_sock_posix.so.6.0 00:03:06.936 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.936 CC module/bdev/gpt/gpt.o 00:03:06.936 SYMLINK libspdk_fsdev_aio.so 00:03:06.936 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.936 SYMLINK libspdk_keyring_linux.so 00:03:06.936 SYMLINK libspdk_accel_dsa.so 00:03:06.936 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.195 SYMLINK libspdk_sock_posix.so 00:03:07.195 LIB libspdk_blobfs_bdev.a 00:03:07.195 LIB libspdk_bdev_delay.a 00:03:07.195 CC module/bdev/malloc/bdev_malloc.o 00:03:07.195 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.195 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.195 SO libspdk_blobfs_bdev.so.6.0 00:03:07.195 SO libspdk_bdev_delay.so.6.0 00:03:07.195 CC module/bdev/null/bdev_null.o 00:03:07.195 LIB libspdk_bdev_error.a 00:03:07.195 SYMLINK libspdk_blobfs_bdev.so 00:03:07.452 CC module/bdev/nvme/bdev_nvme.o 00:03:07.452 CC 
module/bdev/null/bdev_null_rpc.o 00:03:07.452 LIB libspdk_bdev_gpt.a 00:03:07.452 SYMLINK libspdk_bdev_delay.so 00:03:07.452 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.452 SO libspdk_bdev_gpt.so.6.0 00:03:07.452 SO libspdk_bdev_error.so.6.0 00:03:07.452 CC module/bdev/nvme/nvme_rpc.o 00:03:07.452 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.452 SYMLINK libspdk_bdev_error.so 00:03:07.452 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.452 SYMLINK libspdk_bdev_gpt.so 00:03:07.452 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.452 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.710 LIB libspdk_bdev_malloc.a 00:03:07.710 CC module/bdev/nvme/vbdev_opal.o 00:03:07.710 SO libspdk_bdev_malloc.so.6.0 00:03:07.710 SYMLINK libspdk_bdev_malloc.so 00:03:07.969 LIB libspdk_bdev_passthru.a 00:03:07.969 LIB libspdk_bdev_null.a 00:03:07.969 SO libspdk_bdev_passthru.so.6.0 00:03:07.969 SO libspdk_bdev_null.so.6.0 00:03:07.969 CC module/bdev/raid/bdev_raid.o 00:03:07.969 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.969 CC module/bdev/split/vbdev_split.o 00:03:07.969 SYMLINK libspdk_bdev_passthru.so 00:03:07.969 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.969 SYMLINK libspdk_bdev_null.so 00:03:07.969 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.969 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.969 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.228 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.486 LIB libspdk_bdev_lvol.a 00:03:08.486 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.486 CC module/bdev/raid/raid0.o 00:03:08.486 SO libspdk_bdev_lvol.so.6.0 00:03:08.486 LIB libspdk_bdev_split.a 00:03:08.486 SO libspdk_bdev_split.so.6.0 00:03:08.486 SYMLINK libspdk_bdev_lvol.so 00:03:08.486 LIB libspdk_bdev_zone_block.a 00:03:08.745 SYMLINK libspdk_bdev_split.so 00:03:08.745 SO libspdk_bdev_zone_block.so.6.0 00:03:08.745 CC module/bdev/raid/raid1.o 00:03:08.745 CC module/bdev/aio/bdev_aio.o 00:03:08.745 CC module/bdev/ftl/bdev_ftl.o 00:03:08.745 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.745 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.745 SYMLINK libspdk_bdev_zone_block.so 00:03:08.745 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.745 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.745 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.003 CC module/bdev/raid/concat.o 00:03:09.003 CC module/bdev/raid/raid5f.o 00:03:09.003 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:09.003 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:09.003 LIB libspdk_bdev_ftl.a 00:03:09.003 LIB libspdk_bdev_aio.a 00:03:09.003 SO libspdk_bdev_ftl.so.6.0 00:03:09.003 SO libspdk_bdev_aio.so.6.0 00:03:09.261 LIB libspdk_bdev_iscsi.a 00:03:09.261 SYMLINK libspdk_bdev_ftl.so 00:03:09.261 SO libspdk_bdev_iscsi.so.6.0 00:03:09.261 SYMLINK libspdk_bdev_aio.so 00:03:09.261 SYMLINK libspdk_bdev_iscsi.so 00:03:09.542 LIB libspdk_bdev_virtio.a 00:03:09.542 SO libspdk_bdev_virtio.so.6.0 00:03:09.542 SYMLINK libspdk_bdev_virtio.so 00:03:09.542 LIB libspdk_bdev_raid.a 00:03:09.800 SO libspdk_bdev_raid.so.6.0 00:03:09.800 SYMLINK libspdk_bdev_raid.so 00:03:11.177 LIB libspdk_bdev_nvme.a 00:03:11.177 SO libspdk_bdev_nvme.so.7.1 00:03:11.435 SYMLINK libspdk_bdev_nvme.so 00:03:11.693 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.693 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.693 CC module/event/subsystems/vmd/vmd.o 00:03:11.693 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.693 CC module/event/subsystems/keyring/keyring.o 00:03:11.693 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.693 CC module/event/subsystems/sock/sock.o 00:03:11.693 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.951 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.951 LIB libspdk_event_vhost_blk.a 00:03:11.951 LIB libspdk_event_scheduler.a 00:03:11.951 LIB libspdk_event_vmd.a 00:03:11.951 SO libspdk_event_vhost_blk.so.3.0 00:03:11.951 LIB libspdk_event_keyring.a 00:03:11.951 LIB libspdk_event_sock.a 00:03:11.951 SO libspdk_event_scheduler.so.4.0 
00:03:11.951 SO libspdk_event_vmd.so.6.0 00:03:11.951 SO libspdk_event_keyring.so.1.0 00:03:11.951 SO libspdk_event_sock.so.5.0 00:03:11.951 LIB libspdk_event_fsdev.a 00:03:11.951 LIB libspdk_event_iobuf.a 00:03:11.951 SYMLINK libspdk_event_vhost_blk.so 00:03:11.951 SO libspdk_event_fsdev.so.1.0 00:03:11.951 SO libspdk_event_iobuf.so.3.0 00:03:11.951 SYMLINK libspdk_event_scheduler.so 00:03:11.951 SYMLINK libspdk_event_keyring.so 00:03:11.951 SYMLINK libspdk_event_sock.so 00:03:11.951 SYMLINK libspdk_event_vmd.so 00:03:12.210 SYMLINK libspdk_event_fsdev.so 00:03:12.210 SYMLINK libspdk_event_iobuf.so 00:03:12.469 CC module/event/subsystems/accel/accel.o 00:03:12.469 LIB libspdk_event_accel.a 00:03:12.728 SO libspdk_event_accel.so.6.0 00:03:12.728 SYMLINK libspdk_event_accel.so 00:03:12.988 CC module/event/subsystems/bdev/bdev.o 00:03:13.247 LIB libspdk_event_bdev.a 00:03:13.247 SO libspdk_event_bdev.so.6.0 00:03:13.247 SYMLINK libspdk_event_bdev.so 00:03:13.505 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.505 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.505 CC module/event/subsystems/scsi/scsi.o 00:03:13.505 CC module/event/subsystems/nbd/nbd.o 00:03:13.505 CC module/event/subsystems/ublk/ublk.o 00:03:13.764 LIB libspdk_event_nbd.a 00:03:13.764 LIB libspdk_event_ublk.a 00:03:13.764 SO libspdk_event_nbd.so.6.0 00:03:13.764 SO libspdk_event_ublk.so.3.0 00:03:13.764 LIB libspdk_event_scsi.a 00:03:13.764 SO libspdk_event_scsi.so.6.0 00:03:13.764 SYMLINK libspdk_event_nbd.so 00:03:13.764 SYMLINK libspdk_event_ublk.so 00:03:13.764 LIB libspdk_event_nvmf.a 00:03:13.764 SYMLINK libspdk_event_scsi.so 00:03:14.023 SO libspdk_event_nvmf.so.6.0 00:03:14.023 SYMLINK libspdk_event_nvmf.so 00:03:14.023 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.023 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.281 LIB libspdk_event_vhost_scsi.a 00:03:14.281 LIB libspdk_event_iscsi.a 00:03:14.281 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.281 SO 
libspdk_event_iscsi.so.6.0 00:03:14.540 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.540 SYMLINK libspdk_event_iscsi.so 00:03:14.540 SO libspdk.so.6.0 00:03:14.540 SYMLINK libspdk.so 00:03:14.799 CXX app/trace/trace.o 00:03:14.799 TEST_HEADER include/spdk/accel.h 00:03:14.799 TEST_HEADER include/spdk/accel_module.h 00:03:14.799 TEST_HEADER include/spdk/assert.h 00:03:14.799 TEST_HEADER include/spdk/barrier.h 00:03:14.799 TEST_HEADER include/spdk/base64.h 00:03:14.799 TEST_HEADER include/spdk/bdev.h 00:03:14.799 CC app/trace_record/trace_record.o 00:03:14.799 TEST_HEADER include/spdk/bdev_module.h 00:03:14.799 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.799 TEST_HEADER include/spdk/bit_array.h 00:03:14.799 TEST_HEADER include/spdk/bit_pool.h 00:03:14.799 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.799 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.799 TEST_HEADER include/spdk/blobfs.h 00:03:14.799 TEST_HEADER include/spdk/blob.h 00:03:14.799 TEST_HEADER include/spdk/conf.h 00:03:14.799 TEST_HEADER include/spdk/config.h 00:03:14.799 TEST_HEADER include/spdk/cpuset.h 00:03:14.799 TEST_HEADER include/spdk/crc16.h 00:03:14.799 TEST_HEADER include/spdk/crc32.h 00:03:14.799 TEST_HEADER include/spdk/crc64.h 00:03:14.799 TEST_HEADER include/spdk/dif.h 00:03:14.799 TEST_HEADER include/spdk/dma.h 00:03:14.799 TEST_HEADER include/spdk/endian.h 00:03:14.799 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.799 TEST_HEADER include/spdk/env.h 00:03:14.799 TEST_HEADER include/spdk/event.h 00:03:14.799 TEST_HEADER include/spdk/fd_group.h 00:03:14.799 CC app/nvmf_tgt/nvmf_main.o 00:03:14.799 TEST_HEADER include/spdk/fd.h 00:03:14.799 TEST_HEADER include/spdk/file.h 00:03:14.799 TEST_HEADER include/spdk/fsdev.h 00:03:14.799 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.058 TEST_HEADER include/spdk/fsdev_module.h 00:03:15.058 TEST_HEADER include/spdk/ftl.h 00:03:15.058 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:15.058 CC app/spdk_tgt/spdk_tgt.o 00:03:15.058 TEST_HEADER 
include/spdk/gpt_spec.h 00:03:15.058 TEST_HEADER include/spdk/hexlify.h 00:03:15.058 TEST_HEADER include/spdk/histogram_data.h 00:03:15.058 TEST_HEADER include/spdk/idxd.h 00:03:15.058 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.058 TEST_HEADER include/spdk/init.h 00:03:15.058 TEST_HEADER include/spdk/ioat.h 00:03:15.058 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.058 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.058 CC test/thread/poller_perf/poller_perf.o 00:03:15.058 TEST_HEADER include/spdk/json.h 00:03:15.058 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.058 TEST_HEADER include/spdk/keyring.h 00:03:15.058 CC examples/util/zipf/zipf.o 00:03:15.058 TEST_HEADER include/spdk/keyring_module.h 00:03:15.058 TEST_HEADER include/spdk/likely.h 00:03:15.058 TEST_HEADER include/spdk/log.h 00:03:15.058 TEST_HEADER include/spdk/lvol.h 00:03:15.058 TEST_HEADER include/spdk/md5.h 00:03:15.058 TEST_HEADER include/spdk/memory.h 00:03:15.058 TEST_HEADER include/spdk/mmio.h 00:03:15.058 TEST_HEADER include/spdk/nbd.h 00:03:15.058 TEST_HEADER include/spdk/net.h 00:03:15.058 TEST_HEADER include/spdk/notify.h 00:03:15.058 TEST_HEADER include/spdk/nvme.h 00:03:15.058 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.058 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.058 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.058 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.058 CC test/app/bdev_svc/bdev_svc.o 00:03:15.058 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.058 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.058 CC test/dma/test_dma/test_dma.o 00:03:15.058 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.058 TEST_HEADER include/spdk/nvmf.h 00:03:15.058 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.058 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.058 TEST_HEADER include/spdk/opal.h 00:03:15.058 TEST_HEADER include/spdk/opal_spec.h 00:03:15.058 TEST_HEADER include/spdk/pci_ids.h 00:03:15.058 TEST_HEADER include/spdk/pipe.h 00:03:15.058 TEST_HEADER include/spdk/queue.h 
00:03:15.058 TEST_HEADER include/spdk/reduce.h 00:03:15.058 TEST_HEADER include/spdk/rpc.h 00:03:15.058 TEST_HEADER include/spdk/scheduler.h 00:03:15.058 TEST_HEADER include/spdk/scsi.h 00:03:15.058 TEST_HEADER include/spdk/scsi_spec.h 00:03:15.058 TEST_HEADER include/spdk/sock.h 00:03:15.058 TEST_HEADER include/spdk/stdinc.h 00:03:15.058 TEST_HEADER include/spdk/string.h 00:03:15.058 TEST_HEADER include/spdk/thread.h 00:03:15.058 TEST_HEADER include/spdk/trace.h 00:03:15.058 TEST_HEADER include/spdk/trace_parser.h 00:03:15.058 TEST_HEADER include/spdk/tree.h 00:03:15.058 TEST_HEADER include/spdk/ublk.h 00:03:15.058 TEST_HEADER include/spdk/util.h 00:03:15.058 TEST_HEADER include/spdk/uuid.h 00:03:15.058 TEST_HEADER include/spdk/version.h 00:03:15.058 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:15.058 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:15.058 TEST_HEADER include/spdk/vhost.h 00:03:15.058 TEST_HEADER include/spdk/vmd.h 00:03:15.058 TEST_HEADER include/spdk/xor.h 00:03:15.058 TEST_HEADER include/spdk/zipf.h 00:03:15.058 CXX test/cpp_headers/accel.o 00:03:15.058 LINK poller_perf 00:03:15.317 LINK nvmf_tgt 00:03:15.317 LINK iscsi_tgt 00:03:15.317 LINK spdk_trace_record 00:03:15.317 LINK zipf 00:03:15.317 LINK spdk_tgt 00:03:15.317 LINK bdev_svc 00:03:15.317 CXX test/cpp_headers/accel_module.o 00:03:15.575 CXX test/cpp_headers/assert.o 00:03:15.575 LINK spdk_trace 00:03:15.575 CC test/rpc_client/rpc_client_test.o 00:03:15.575 CC test/event/event_perf/event_perf.o 00:03:15.575 CC app/spdk_lspci/spdk_lspci.o 00:03:15.575 CXX test/cpp_headers/barrier.o 00:03:15.575 CC app/spdk_nvme_perf/perf.o 00:03:15.575 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.575 CC examples/ioat/perf/perf.o 00:03:15.835 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.835 LINK rpc_client_test 00:03:15.835 LINK spdk_lspci 00:03:15.835 LINK test_dma 00:03:15.835 LINK event_perf 00:03:15.835 CC examples/ioat/verify/verify.o 00:03:15.835 CXX test/cpp_headers/base64.o 00:03:15.835 
LINK ioat_perf 00:03:16.093 CXX test/cpp_headers/bdev.o 00:03:16.093 CC test/app/histogram_perf/histogram_perf.o 00:03:16.093 CC test/event/reactor/reactor.o 00:03:16.093 CC test/event/reactor_perf/reactor_perf.o 00:03:16.093 LINK verify 00:03:16.093 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.093 LINK histogram_perf 00:03:16.351 LINK reactor 00:03:16.351 LINK reactor_perf 00:03:16.351 CXX test/cpp_headers/bdev_module.o 00:03:16.351 LINK nvme_fuzz 00:03:16.351 LINK mem_callbacks 00:03:16.351 CC test/accel/dif/dif.o 00:03:16.351 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.609 CC test/env/vtophys/vtophys.o 00:03:16.610 CXX test/cpp_headers/bdev_zone.o 00:03:16.610 CC test/app/jsoncat/jsoncat.o 00:03:16.610 CC test/event/app_repeat/app_repeat.o 00:03:16.610 CC app/spdk_nvme_identify/identify.o 00:03:16.610 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.610 LINK lsvmd 00:03:16.610 LINK jsoncat 00:03:16.610 LINK vtophys 00:03:16.610 LINK app_repeat 00:03:16.610 CXX test/cpp_headers/bit_array.o 00:03:16.867 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.867 CXX test/cpp_headers/bit_pool.o 00:03:16.867 LINK spdk_nvme_perf 00:03:16.867 CC examples/vmd/led/led.o 00:03:16.867 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.126 CC test/env/memory/memory_ut.o 00:03:17.126 CXX test/cpp_headers/blob_bdev.o 00:03:17.126 CC test/event/scheduler/scheduler.o 00:03:17.126 LINK led 00:03:17.126 LINK env_dpdk_post_init 00:03:17.126 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.385 LINK vhost_fuzz 00:03:17.385 LINK dif 00:03:17.385 LINK scheduler 00:03:17.385 CC test/env/pci/pci_ut.o 00:03:17.385 CXX test/cpp_headers/blobfs.o 00:03:17.712 CC examples/idxd/perf/perf.o 00:03:17.713 CXX test/cpp_headers/blob.o 00:03:17.713 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.713 CC examples/sock/hello_world/hello_sock.o 00:03:17.713 CC examples/thread/thread/thread_ex.o 00:03:17.713 CXX test/cpp_headers/conf.o 00:03:17.989 LINK interrupt_tgt 00:03:17.989 LINK 
spdk_nvme_identify 00:03:17.989 LINK idxd_perf 00:03:17.989 LINK pci_ut 00:03:17.989 CC test/blobfs/mkfs/mkfs.o 00:03:17.989 CXX test/cpp_headers/config.o 00:03:17.989 CXX test/cpp_headers/cpuset.o 00:03:17.989 CXX test/cpp_headers/crc16.o 00:03:17.989 LINK hello_sock 00:03:18.247 CXX test/cpp_headers/crc32.o 00:03:18.247 LINK thread 00:03:18.247 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.247 CXX test/cpp_headers/crc64.o 00:03:18.247 CXX test/cpp_headers/dif.o 00:03:18.247 LINK mkfs 00:03:18.247 CC test/app/stub/stub.o 00:03:18.247 CXX test/cpp_headers/dma.o 00:03:18.506 LINK iscsi_fuzz 00:03:18.506 LINK spdk_nvme_discover 00:03:18.506 CC examples/nvme/hello_world/hello_world.o 00:03:18.506 LINK stub 00:03:18.506 LINK memory_ut 00:03:18.506 CXX test/cpp_headers/endian.o 00:03:18.765 CC test/nvme/aer/aer.o 00:03:18.765 CC test/nvme/reset/reset.o 00:03:18.765 CC test/lvol/esnap/esnap.o 00:03:18.765 CC app/spdk_top/spdk_top.o 00:03:18.765 CC test/bdev/bdevio/bdevio.o 00:03:18.765 LINK hello_world 00:03:18.765 CC test/nvme/sgl/sgl.o 00:03:18.765 CXX test/cpp_headers/env_dpdk.o 00:03:18.765 CC test/nvme/e2edp/nvme_dp.o 00:03:19.023 CC app/vhost/vhost.o 00:03:19.023 LINK reset 00:03:19.023 CXX test/cpp_headers/env.o 00:03:19.023 LINK aer 00:03:19.023 CC examples/nvme/reconnect/reconnect.o 00:03:19.023 LINK sgl 00:03:19.283 LINK vhost 00:03:19.283 CXX test/cpp_headers/event.o 00:03:19.283 LINK nvme_dp 00:03:19.283 CXX test/cpp_headers/fd_group.o 00:03:19.283 LINK bdevio 00:03:19.283 CC test/nvme/overhead/overhead.o 00:03:19.542 CXX test/cpp_headers/fd.o 00:03:19.542 CC test/nvme/err_injection/err_injection.o 00:03:19.542 CC test/nvme/reserve/reserve.o 00:03:19.542 CC test/nvme/startup/startup.o 00:03:19.542 CC test/nvme/simple_copy/simple_copy.o 00:03:19.542 LINK reconnect 00:03:19.542 CC test/nvme/connect_stress/connect_stress.o 00:03:19.542 CXX test/cpp_headers/file.o 00:03:19.800 LINK startup 00:03:19.800 LINK err_injection 00:03:19.800 LINK overhead 
00:03:19.800 LINK reserve 00:03:19.800 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.800 LINK simple_copy 00:03:19.800 LINK connect_stress 00:03:19.800 CXX test/cpp_headers/fsdev.o 00:03:19.800 CXX test/cpp_headers/fsdev_module.o 00:03:19.800 CXX test/cpp_headers/ftl.o 00:03:20.058 CXX test/cpp_headers/fuse_dispatcher.o 00:03:20.058 CC test/nvme/boot_partition/boot_partition.o 00:03:20.058 CC test/nvme/compliance/nvme_compliance.o 00:03:20.058 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.058 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.058 CXX test/cpp_headers/gpt_spec.o 00:03:20.316 CC test/nvme/fdp/fdp.o 00:03:20.316 LINK spdk_top 00:03:20.316 CC test/nvme/cuse/cuse.o 00:03:20.316 LINK boot_partition 00:03:20.316 CXX test/cpp_headers/hexlify.o 00:03:20.316 LINK fused_ordering 00:03:20.316 LINK doorbell_aers 00:03:20.574 LINK nvme_manage 00:03:20.575 LINK nvme_compliance 00:03:20.575 CC app/spdk_dd/spdk_dd.o 00:03:20.575 CXX test/cpp_headers/histogram_data.o 00:03:20.575 LINK fdp 00:03:20.575 CC examples/nvme/arbitration/arbitration.o 00:03:20.834 CXX test/cpp_headers/idxd.o 00:03:20.834 CC examples/accel/perf/accel_perf.o 00:03:20.834 CC examples/nvme/hotplug/hotplug.o 00:03:20.834 CC app/fio/nvme/fio_plugin.o 00:03:20.834 CXX test/cpp_headers/idxd_spec.o 00:03:21.093 CC app/fio/bdev/fio_plugin.o 00:03:21.093 CC examples/blob/hello_world/hello_blob.o 00:03:21.093 LINK hotplug 00:03:21.093 LINK arbitration 00:03:21.093 LINK spdk_dd 00:03:21.093 CXX test/cpp_headers/init.o 00:03:21.352 LINK hello_blob 00:03:21.352 CXX test/cpp_headers/ioat.o 00:03:21.352 CXX test/cpp_headers/ioat_spec.o 00:03:21.352 CXX test/cpp_headers/iscsi_spec.o 00:03:21.352 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.352 LINK accel_perf 00:03:21.352 CXX test/cpp_headers/json.o 00:03:21.635 CXX test/cpp_headers/jsonrpc.o 00:03:21.635 LINK spdk_nvme 00:03:21.635 CXX test/cpp_headers/keyring.o 00:03:21.635 CC examples/blob/cli/blobcli.o 00:03:21.635 LINK cmb_copy 
00:03:21.635 LINK spdk_bdev 00:03:21.635 CXX test/cpp_headers/keyring_module.o 00:03:21.635 CC examples/nvme/abort/abort.o 00:03:21.904 CXX test/cpp_headers/likely.o 00:03:21.904 CXX test/cpp_headers/log.o 00:03:21.904 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.904 CXX test/cpp_headers/lvol.o 00:03:21.904 LINK cuse 00:03:21.904 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:21.904 CXX test/cpp_headers/md5.o 00:03:21.904 CXX test/cpp_headers/memory.o 00:03:22.164 LINK pmr_persistence 00:03:22.164 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.164 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.164 CXX test/cpp_headers/mmio.o 00:03:22.164 CXX test/cpp_headers/nbd.o 00:03:22.164 CXX test/cpp_headers/net.o 00:03:22.164 LINK abort 00:03:22.164 CXX test/cpp_headers/notify.o 00:03:22.164 CXX test/cpp_headers/nvme.o 00:03:22.164 LINK blobcli 00:03:22.164 LINK hello_fsdev 00:03:22.422 CXX test/cpp_headers/nvme_intel.o 00:03:22.422 CXX test/cpp_headers/nvme_ocssd.o 00:03:22.422 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.422 LINK hello_bdev 00:03:22.422 CXX test/cpp_headers/nvme_spec.o 00:03:22.422 CXX test/cpp_headers/nvme_zns.o 00:03:22.422 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.422 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.681 CXX test/cpp_headers/nvmf.o 00:03:22.681 CXX test/cpp_headers/nvmf_spec.o 00:03:22.681 CXX test/cpp_headers/nvmf_transport.o 00:03:22.681 CXX test/cpp_headers/opal.o 00:03:22.681 CXX test/cpp_headers/opal_spec.o 00:03:22.681 CXX test/cpp_headers/pci_ids.o 00:03:22.681 CXX test/cpp_headers/pipe.o 00:03:22.681 CXX test/cpp_headers/queue.o 00:03:22.681 CXX test/cpp_headers/reduce.o 00:03:22.681 CXX test/cpp_headers/rpc.o 00:03:22.939 CXX test/cpp_headers/scheduler.o 00:03:22.939 CXX test/cpp_headers/scsi.o 00:03:22.939 CXX test/cpp_headers/scsi_spec.o 00:03:22.939 CXX test/cpp_headers/sock.o 00:03:22.939 CXX test/cpp_headers/stdinc.o 00:03:22.939 CXX test/cpp_headers/string.o 00:03:22.939 CXX test/cpp_headers/thread.o 
00:03:22.939 CXX test/cpp_headers/trace.o 00:03:22.939 CXX test/cpp_headers/trace_parser.o 00:03:22.939 CXX test/cpp_headers/tree.o 00:03:22.939 CXX test/cpp_headers/ublk.o 00:03:22.939 CXX test/cpp_headers/util.o 00:03:23.198 CXX test/cpp_headers/uuid.o 00:03:23.198 CXX test/cpp_headers/version.o 00:03:23.198 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.198 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.198 CXX test/cpp_headers/vhost.o 00:03:23.198 CXX test/cpp_headers/vmd.o 00:03:23.198 CXX test/cpp_headers/xor.o 00:03:23.198 CXX test/cpp_headers/zipf.o 00:03:23.198 LINK bdevperf 00:03:23.765 CC examples/nvmf/nvmf/nvmf.o 00:03:24.332 LINK nvmf 00:03:26.863 LINK esnap 00:03:26.863 00:03:26.863 real 1m40.900s 00:03:26.863 user 9m20.255s 00:03:26.863 sys 1m48.359s 00:03:26.863 ************************************ 00:03:26.863 END TEST make 00:03:26.863 ************************************ 00:03:26.863 18:50:18 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:26.863 18:50:18 make -- common/autotest_common.sh@10 -- $ set +x 00:03:26.863 18:50:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:26.864 18:50:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:26.864 18:50:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:26.864 18:50:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.864 18:50:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:26.864 18:50:18 -- pm/common@44 -- $ pid=5255 00:03:26.864 18:50:18 -- pm/common@50 -- $ kill -TERM 5255 00:03:26.864 18:50:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.864 18:50:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:26.864 18:50:18 -- pm/common@44 -- $ pid=5257 00:03:26.864 18:50:18 -- pm/common@50 -- $ kill -TERM 5257 00:03:26.864 18:50:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:26.864 18:50:18 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:27.123 18:50:18 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:27.123 18:50:18 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:27.123 18:50:18 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:27.123 18:50:18 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:27.123 18:50:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.123 18:50:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.123 18:50:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.123 18:50:18 -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.123 18:50:18 -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.123 18:50:18 -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.123 18:50:18 -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.123 18:50:18 -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.123 18:50:18 -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.123 18:50:18 -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.123 18:50:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.123 18:50:18 -- scripts/common.sh@344 -- # case "$op" in 00:03:27.123 18:50:18 -- scripts/common.sh@345 -- # : 1 00:03:27.123 18:50:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.123 18:50:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.123 18:50:18 -- scripts/common.sh@365 -- # decimal 1 00:03:27.123 18:50:18 -- scripts/common.sh@353 -- # local d=1 00:03:27.123 18:50:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.123 18:50:18 -- scripts/common.sh@355 -- # echo 1 00:03:27.123 18:50:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.123 18:50:18 -- scripts/common.sh@366 -- # decimal 2 00:03:27.123 18:50:18 -- scripts/common.sh@353 -- # local d=2 00:03:27.123 18:50:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.123 18:50:18 -- scripts/common.sh@355 -- # echo 2 00:03:27.123 18:50:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.123 18:50:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.123 18:50:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.123 18:50:18 -- scripts/common.sh@368 -- # return 0 00:03:27.123 18:50:18 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.123 18:50:18 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.123 --rc genhtml_branch_coverage=1 00:03:27.123 --rc genhtml_function_coverage=1 00:03:27.123 --rc genhtml_legend=1 00:03:27.123 --rc geninfo_all_blocks=1 00:03:27.123 --rc geninfo_unexecuted_blocks=1 00:03:27.123 00:03:27.123 ' 00:03:27.123 18:50:18 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.123 --rc genhtml_branch_coverage=1 00:03:27.123 --rc genhtml_function_coverage=1 00:03:27.123 --rc genhtml_legend=1 00:03:27.123 --rc geninfo_all_blocks=1 00:03:27.123 --rc geninfo_unexecuted_blocks=1 00:03:27.123 00:03:27.123 ' 00:03:27.123 18:50:18 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.123 --rc genhtml_branch_coverage=1 00:03:27.123 --rc 
genhtml_function_coverage=1 00:03:27.123 --rc genhtml_legend=1 00:03:27.123 --rc geninfo_all_blocks=1 00:03:27.123 --rc geninfo_unexecuted_blocks=1 00:03:27.123 00:03:27.123 ' 00:03:27.123 18:50:18 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:27.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.123 --rc genhtml_branch_coverage=1 00:03:27.123 --rc genhtml_function_coverage=1 00:03:27.123 --rc genhtml_legend=1 00:03:27.123 --rc geninfo_all_blocks=1 00:03:27.123 --rc geninfo_unexecuted_blocks=1 00:03:27.123 00:03:27.123 ' 00:03:27.123 18:50:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:27.124 18:50:18 -- nvmf/common.sh@7 -- # uname -s 00:03:27.124 18:50:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.124 18:50:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.124 18:50:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.124 18:50:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.124 18:50:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.124 18:50:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.124 18:50:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.124 18:50:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:27.124 18:50:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.124 18:50:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.124 18:50:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:41809d3e-876d-42b7-b00f-49485f9c796b 00:03:27.124 18:50:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=41809d3e-876d-42b7-b00f-49485f9c796b 00:03:27.124 18:50:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.124 18:50:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.124 18:50:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:27.124 18:50:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:27.124 18:50:18 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:27.124 18:50:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:27.124 18:50:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.124 18:50:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.124 18:50:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.124 18:50:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.124 18:50:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.124 18:50:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.124 18:50:18 -- paths/export.sh@5 -- # export PATH 00:03:27.124 18:50:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.124 18:50:18 -- nvmf/common.sh@51 -- # : 0 00:03:27.124 18:50:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:27.124 18:50:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:27.124 18:50:18 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:27.124 18:50:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.124 18:50:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.124 18:50:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:27.124 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:27.124 18:50:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:27.124 18:50:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:27.124 18:50:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:27.124 18:50:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.124 18:50:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.124 18:50:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.124 18:50:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:27.124 18:50:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.124 18:50:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.124 18:50:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.124 18:50:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.384 18:50:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.384 18:50:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:27.384 18:50:18 -- spdk/autotest.sh@48 -- # udevadm_pid=54364 00:03:27.384 18:50:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:27.384 18:50:18 -- pm/common@17 -- # local monitor 00:03:27.384 18:50:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:27.384 18:50:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.384 18:50:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.384 18:50:18 -- pm/common@25 -- # sleep 1 00:03:27.384 18:50:18 -- pm/common@21 -- # date +%s 00:03:27.384 18:50:18 -- 
pm/common@21 -- # date +%s 00:03:27.384 18:50:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732647018 00:03:27.384 18:50:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732647018 00:03:27.384 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732647018_collect-vmstat.pm.log 00:03:27.384 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732647018_collect-cpu-load.pm.log 00:03:28.323 18:50:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.323 18:50:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.323 18:50:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:28.323 18:50:19 -- common/autotest_common.sh@10 -- # set +x 00:03:28.323 18:50:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.323 18:50:19 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:28.323 18:50:19 -- common/autotest_common.sh@10 -- # set +x 00:03:28.323 18:50:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:28.323 18:50:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:28.323 18:50:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:28.323 18:50:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:28.323 18:50:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:28.323 18:50:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:28.323 18:50:19 -- common/autotest_common.sh@1457 -- # uname 00:03:28.323 18:50:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:28.323 18:50:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:28.323 18:50:19 -- common/autotest_common.sh@1477 -- 
# uname 00:03:28.323 18:50:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:28.323 18:50:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:28.323 18:50:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:28.323 lcov: LCOV version 1.15 00:03:28.323 18:50:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:46.472 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:04.653 18:50:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:04.653 18:50:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.653 18:50:54 -- common/autotest_common.sh@10 -- # set +x 00:04:04.653 18:50:54 -- spdk/autotest.sh@78 -- # rm -f 00:04:04.653 18:50:54 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.653 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:04.653 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:04.653 18:50:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:04.653 18:50:54 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:04.653 18:50:54 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:04.653 18:50:54 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:04.653 
18:50:54 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.653 18:50:54 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:04.653 18:50:54 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:04.653 18:50:54 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.653 18:50:54 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:04.653 18:50:54 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:04.653 18:50:54 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.653 18:50:54 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:04.653 18:50:54 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:04.653 18:50:54 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.653 18:50:54 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:04.653 18:50:54 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:04.653 18:50:54 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:04.653 18:50:54 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.653 18:50:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:04.653 18:50:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.653 18:50:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.653 18:50:54 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:04.653 18:50:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:04.653 18:50:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:04.653 No valid GPT data, bailing 00:04:04.653 18:50:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.653 18:50:54 -- scripts/common.sh@394 -- # pt= 00:04:04.653 18:50:54 -- scripts/common.sh@395 -- # return 1 00:04:04.653 18:50:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:04.653 1+0 records in 00:04:04.653 1+0 records out 00:04:04.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504796 s, 208 MB/s 00:04:04.653 18:50:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.653 18:50:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.653 18:50:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:04.653 18:50:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:04.653 18:50:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:04.653 No valid GPT data, bailing 00:04:04.653 18:50:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:04.653 18:50:54 -- scripts/common.sh@394 -- # pt= 00:04:04.653 18:50:54 -- scripts/common.sh@395 -- # return 1 00:04:04.653 18:50:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:04.653 1+0 records in 00:04:04.653 1+0 records out 00:04:04.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587125 s, 179 MB/s 00:04:04.653 18:50:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.653 18:50:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.653 18:50:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:04.653 18:50:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:04.653 18:50:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:04.653 No valid GPT data, bailing 00:04:04.653 18:50:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:04.653 18:50:55 -- scripts/common.sh@394 -- # pt= 00:04:04.653 18:50:55 -- scripts/common.sh@395 -- # return 1 00:04:04.653 18:50:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:04.653 1+0 records in 00:04:04.653 1+0 records out 00:04:04.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464983 s, 226 MB/s 00:04:04.653 18:50:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.653 18:50:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.653 18:50:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:04.653 18:50:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:04.653 18:50:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:04.653 No valid GPT data, bailing 00:04:04.653 18:50:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:04.653 18:50:55 -- scripts/common.sh@394 -- # pt= 00:04:04.653 18:50:55 -- scripts/common.sh@395 -- # return 1 00:04:04.653 18:50:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:04.653 1+0 records in 00:04:04.653 1+0 records out 00:04:04.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425489 s, 246 MB/s 00:04:04.653 18:50:55 -- spdk/autotest.sh@105 -- # sync 00:04:04.653 18:50:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:04.653 18:50:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:04.653 18:50:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.028 18:50:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:06.028 18:50:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:06.028 18:50:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:06.028 18:50:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
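The per-device cleanup loop above — `spdk-gpt.py` and `blkid` finding no partition-table type, then `dd` zeroing the first MiB — can be sketched as a standalone helper. The function name is illustrative, and `conv=notrunc status=none` are additions here so the sketch behaves on a regular test file; the log's `dd` writes to the raw device:

```shell
# Sketch of the wipe step above: when blkid reports no partition-table
# type for a device, zero its first MiB so stale metadata cannot leak
# into the next test run. wipe_if_unpartitioned is an illustrative name.
wipe_if_unpartitioned() {
    local dev=$1 pt
    # blkid exits non-zero when it finds nothing; treat that as "no table".
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null) || pt=
    if [[ -z $pt ]]; then
        # conv=notrunc keeps a regular-file target at its original size.
        dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
    fi
}
```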
00:04:06.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.597 Hugepages 00:04:06.597 node hugesize free / total 00:04:06.597 node0 1048576kB 0 / 0 00:04:06.597 node0 2048kB 0 / 0 00:04:06.597 00:04:06.597 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.856 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:06.856 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:06.856 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:06.856 18:50:58 -- spdk/autotest.sh@117 -- # uname -s 00:04:06.856 18:50:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:06.856 18:50:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:06.856 18:50:58 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.790 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.790 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.790 18:50:59 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:08.723 18:51:00 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:08.723 18:51:00 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:08.723 18:51:00 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:08.723 18:51:00 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:08.723 18:51:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.723 18:51:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.723 18:51:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.723 18:51:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.723 18:51:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.723 18:51:00 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:08.723 18:51:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:08.723 18:51:00 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.325 Waiting for block devices as requested 00:04:09.325 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:09.325 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:09.325 18:51:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.325 18:51:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:09.584 18:51:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:09.584 18:51:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.584 18:51:00 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.584 18:51:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1543 -- # continue 00:04:09.584 18:51:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:09.584 18:51:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:09.584 18:51:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:09.584 18:51:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:09.584 18:51:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:09.584 18:51:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:09.584 18:51:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:09.584 18:51:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:09.584 18:51:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:09.584 18:51:00 -- common/autotest_common.sh@1543 -- # continue 00:04:09.584 18:51:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:09.584 18:51:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.584 18:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.584 18:51:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:09.584 18:51:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.584 18:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.584 18:51:00 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.408 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.408 18:51:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:10.408 18:51:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.408 18:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.408 18:51:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:10.408 18:51:01 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:10.408 18:51:01 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:10.408 18:51:01 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:10.408 18:51:01 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:10.408 18:51:01 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:10.408 18:51:01 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:10.408 18:51:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:10.408 
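The `nvme id-ctrl` parsing above reduces to two per-controller checks: OACS bit 3 (mask `0x8`, Namespace Management support per the NVMe spec) and an `unvmcap` of 0, which lets the cleanup `continue` without reverting namespaces. The bit test can be sketched as follows (function name is illustrative; the sample value `0x12a` is taken from the log):

```shell
# Sketch of the OACS check above: masking bit 3 of the Optional Admin
# Command Support field from `nvme id-ctrl`. 0x12a & 0x8 == 8, which is
# why the log records oacs_ns_manage=8 and takes the [[ 8 -ne 0 ]] branch.
oacs_ns_manage() {
    local oacs=$1          # e.g. " 0x12a" as cut out of the id-ctrl output
    echo $(( oacs & 0x8 ))
}
```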
18:51:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.408 18:51:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.408 18:51:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.408 18:51:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.408 18:51:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:10.665 18:51:01 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:10.665 18:51:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.665 18:51:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.665 18:51:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:10.665 18:51:01 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:10.665 18:51:01 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.665 18:51:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:10.665 18:51:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:10.665 18:51:01 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:10.665 18:51:01 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.665 18:51:01 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:10.665 18:51:01 -- common/autotest_common.sh@1572 -- # return 0 00:04:10.665 18:51:01 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:10.665 18:51:01 -- common/autotest_common.sh@1580 -- # return 0 00:04:10.665 18:51:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:10.665 18:51:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:10.665 18:51:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.665 18:51:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.665 18:51:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:10.665 18:51:01 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.665 18:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.665 18:51:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:10.665 18:51:01 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.665 18:51:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.665 18:51:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.665 18:51:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.665 ************************************ 00:04:10.665 START TEST env 00:04:10.665 ************************************ 00:04:10.665 18:51:01 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.665 * Looking for test storage... 00:04:10.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:10.665 18:51:01 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:10.665 18:51:01 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:10.665 18:51:01 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.665 18:51:01 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.665 18:51:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.665 18:51:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.665 18:51:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.665 18:51:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.665 18:51:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.665 18:51:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.665 18:51:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.665 18:51:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.665 18:51:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.665 18:51:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.665 18:51:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.665 18:51:01 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:10.665 18:51:01 env -- scripts/common.sh@345 -- # : 1 00:04:10.665 18:51:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.665 18:51:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.665 18:51:01 env -- scripts/common.sh@365 -- # decimal 1 00:04:10.665 18:51:01 env -- scripts/common.sh@353 -- # local d=1 00:04:10.665 18:51:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.665 18:51:02 env -- scripts/common.sh@355 -- # echo 1 00:04:10.665 18:51:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.665 18:51:02 env -- scripts/common.sh@366 -- # decimal 2 00:04:10.665 18:51:02 env -- scripts/common.sh@353 -- # local d=2 00:04:10.665 18:51:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.665 18:51:02 env -- scripts/common.sh@355 -- # echo 2 00:04:10.665 18:51:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.665 18:51:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.665 18:51:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.665 18:51:02 env -- scripts/common.sh@368 -- # return 0 00:04:10.665 18:51:02 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.665 18:51:02 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.665 --rc genhtml_branch_coverage=1 00:04:10.665 --rc genhtml_function_coverage=1 00:04:10.665 --rc genhtml_legend=1 00:04:10.665 --rc geninfo_all_blocks=1 00:04:10.665 --rc geninfo_unexecuted_blocks=1 00:04:10.665 00:04:10.665 ' 00:04:10.665 18:51:02 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.665 --rc genhtml_branch_coverage=1 00:04:10.665 --rc genhtml_function_coverage=1 00:04:10.665 --rc genhtml_legend=1 00:04:10.665 --rc 
geninfo_all_blocks=1 00:04:10.666 --rc geninfo_unexecuted_blocks=1 00:04:10.666 00:04:10.666 ' 00:04:10.666 18:51:02 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.666 --rc genhtml_branch_coverage=1 00:04:10.666 --rc genhtml_function_coverage=1 00:04:10.666 --rc genhtml_legend=1 00:04:10.666 --rc geninfo_all_blocks=1 00:04:10.666 --rc geninfo_unexecuted_blocks=1 00:04:10.666 00:04:10.666 ' 00:04:10.666 18:51:02 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.666 --rc genhtml_branch_coverage=1 00:04:10.666 --rc genhtml_function_coverage=1 00:04:10.666 --rc genhtml_legend=1 00:04:10.666 --rc geninfo_all_blocks=1 00:04:10.666 --rc geninfo_unexecuted_blocks=1 00:04:10.666 00:04:10.666 ' 00:04:10.666 18:51:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.666 18:51:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.666 18:51:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.666 18:51:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.666 ************************************ 00:04:10.666 START TEST env_memory 00:04:10.666 ************************************ 00:04:10.666 18:51:02 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.924 00:04:10.924 00:04:10.924 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.924 http://cunit.sourceforge.net/ 00:04:10.924 00:04:10.924 00:04:10.924 Suite: memory 00:04:10.924 Test: alloc and free memory map ...[2024-11-26 18:51:02.095415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:10.924 passed 00:04:10.924 Test: mem map translation ...[2024-11-26 18:51:02.156580] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:10.924 [2024-11-26 18:51:02.156729] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:10.924 [2024-11-26 18:51:02.156850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:10.924 [2024-11-26 18:51:02.156885] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:10.924 passed 00:04:10.924 Test: mem map registration ...[2024-11-26 18:51:02.254532] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:10.924 [2024-11-26 18:51:02.254671] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:10.924 passed 00:04:11.183 Test: mem map adjacent registrations ...passed 00:04:11.183 00:04:11.183 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.183 suites 1 1 n/a 0 0 00:04:11.183 tests 4 4 4 0 0 00:04:11.183 asserts 152 152 152 0 n/a 00:04:11.183 00:04:11.183 Elapsed time = 0.316 seconds 00:04:11.183 00:04:11.183 real 0m0.360s 00:04:11.183 user 0m0.326s 00:04:11.183 sys 0m0.025s 00:04:11.183 18:51:02 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.183 ************************************ 00:04:11.183 18:51:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:11.183 END TEST env_memory 00:04:11.183 ************************************ 00:04:11.183 18:51:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:11.183 
18:51:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.183 18:51:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.183 18:51:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.183 ************************************ 00:04:11.183 START TEST env_vtophys 00:04:11.183 ************************************ 00:04:11.183 18:51:02 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:11.183 EAL: lib.eal log level changed from notice to debug 00:04:11.183 EAL: Detected lcore 0 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 1 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 2 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 3 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 4 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 5 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 6 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 7 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 8 as core 0 on socket 0 00:04:11.183 EAL: Detected lcore 9 as core 0 on socket 0 00:04:11.183 EAL: Maximum logical cores by configuration: 128 00:04:11.183 EAL: Detected CPU lcores: 10 00:04:11.183 EAL: Detected NUMA nodes: 1 00:04:11.183 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:11.183 EAL: Detected shared linkage of DPDK 00:04:11.183 EAL: No shared files mode enabled, IPC will be disabled 00:04:11.183 EAL: Selected IOVA mode 'PA' 00:04:11.183 EAL: Probing VFIO support... 00:04:11.183 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:11.183 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:11.183 EAL: Ask a virtual area of 0x2e000 bytes 00:04:11.183 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:11.183 EAL: Setting up physically contiguous memory... 
00:04:11.183 EAL: Setting maximum number of open files to 524288 00:04:11.183 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:11.183 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:11.183 EAL: Hugepages will be freed exactly as allocated. 
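Each of the four `size = 0x400000000` reservations above follows directly from the memseg-list geometry EAL prints: `n_segs * hugepage_sz` of virtual address space per list. A quick check of that arithmetic:

```shell
# Each memseg list reserves n_segs * hugepage_sz of VA space:
# 8192 segments * 2 MiB hugepages = 16 GiB = 0x400000000, matching the
# per-list "size = 0x400000000" reservations in the EAL trace.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))
printf '0x%x\n' $(( n_segs * hugepage_sz ))
```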
00:04:11.183 EAL: No shared files mode enabled, IPC is disabled 00:04:11.183 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: TSC frequency is ~2200000 KHz 00:04:11.442 EAL: Main lcore 0 is ready (tid=7f4d2e0c0a40;cpuset=[0]) 00:04:11.442 EAL: Trying to obtain current memory policy. 00:04:11.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.442 EAL: Restoring previous memory policy: 0 00:04:11.442 EAL: request: mp_malloc_sync 00:04:11.442 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: Heap on socket 0 was expanded by 2MB 00:04:11.442 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:11.442 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:11.442 EAL: Mem event callback 'spdk:(nil)' registered 00:04:11.442 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:11.442 00:04:11.442 00:04:11.442 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.442 http://cunit.sourceforge.net/ 00:04:11.442 00:04:11.442 00:04:11.442 Suite: components_suite 00:04:12.010 Test: vtophys_malloc_test ...passed 00:04:12.010 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:12.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.010 EAL: Restoring previous memory policy: 4 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was expanded by 4MB 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was shrunk by 4MB 00:04:12.010 EAL: Trying to obtain current memory policy. 
00:04:12.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.010 EAL: Restoring previous memory policy: 4 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was expanded by 6MB 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was shrunk by 6MB 00:04:12.010 EAL: Trying to obtain current memory policy. 00:04:12.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.010 EAL: Restoring previous memory policy: 4 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was expanded by 10MB 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was shrunk by 10MB 00:04:12.010 EAL: Trying to obtain current memory policy. 00:04:12.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.010 EAL: Restoring previous memory policy: 4 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was expanded by 18MB 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was shrunk by 18MB 00:04:12.010 EAL: Trying to obtain current memory policy. 
00:04:12.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.010 EAL: Restoring previous memory policy: 4 00:04:12.010 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.010 EAL: request: mp_malloc_sync 00:04:12.010 EAL: No shared files mode enabled, IPC is disabled 00:04:12.010 EAL: Heap on socket 0 was expanded by 34MB 00:04:12.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.268 EAL: request: mp_malloc_sync 00:04:12.268 EAL: No shared files mode enabled, IPC is disabled 00:04:12.268 EAL: Heap on socket 0 was shrunk by 34MB 00:04:12.268 EAL: Trying to obtain current memory policy. 00:04:12.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.268 EAL: Restoring previous memory policy: 4 00:04:12.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.268 EAL: request: mp_malloc_sync 00:04:12.268 EAL: No shared files mode enabled, IPC is disabled 00:04:12.268 EAL: Heap on socket 0 was expanded by 66MB 00:04:12.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.268 EAL: request: mp_malloc_sync 00:04:12.268 EAL: No shared files mode enabled, IPC is disabled 00:04:12.268 EAL: Heap on socket 0 was shrunk by 66MB 00:04:12.526 EAL: Trying to obtain current memory policy. 00:04:12.526 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.526 EAL: Restoring previous memory policy: 4 00:04:12.526 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.526 EAL: request: mp_malloc_sync 00:04:12.526 EAL: No shared files mode enabled, IPC is disabled 00:04:12.526 EAL: Heap on socket 0 was expanded by 130MB 00:04:12.785 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.785 EAL: request: mp_malloc_sync 00:04:12.785 EAL: No shared files mode enabled, IPC is disabled 00:04:12.785 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.042 EAL: Trying to obtain current memory policy. 
00:04:13.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.042 EAL: Restoring previous memory policy: 4 00:04:13.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.042 EAL: request: mp_malloc_sync 00:04:13.042 EAL: No shared files mode enabled, IPC is disabled 00:04:13.042 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.609 EAL: request: mp_malloc_sync 00:04:13.609 EAL: No shared files mode enabled, IPC is disabled 00:04:13.609 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.868 EAL: Trying to obtain current memory policy. 00:04:13.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.126 EAL: Restoring previous memory policy: 4 00:04:14.126 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.126 EAL: request: mp_malloc_sync 00:04:14.126 EAL: No shared files mode enabled, IPC is disabled 00:04:14.126 EAL: Heap on socket 0 was expanded by 514MB 00:04:15.062 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.062 EAL: request: mp_malloc_sync 00:04:15.062 EAL: No shared files mode enabled, IPC is disabled 00:04:15.062 EAL: Heap on socket 0 was shrunk by 514MB 00:04:15.629 EAL: Trying to obtain current memory policy. 
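The heap expand/shrink pairs in `vtophys_spdk_malloc_test` above (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, …) fit a `2^k + 2` MB progression; that is an observation from this log's output, not a claim about SPDK's source:

```shell
# Reproduce the allocation ladder seen in the expand/shrink pairs:
# step k allocates (2^k + 2) MB, giving 4, 6, 10, ... 1026 MB.
for k in $(seq 1 10); do
    echo "$(( (1 << k) + 2 ))MB"
done
```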
00:04:15.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.195 EAL: Restoring previous memory policy: 4 00:04:16.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.195 EAL: request: mp_malloc_sync 00:04:16.195 EAL: No shared files mode enabled, IPC is disabled 00:04:16.195 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.852 EAL: request: mp_malloc_sync 00:04:17.852 EAL: No shared files mode enabled, IPC is disabled 00:04:17.852 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.756 passed 00:04:19.756 00:04:19.756 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.756 suites 1 1 n/a 0 0 00:04:19.756 tests 2 2 2 0 0 00:04:19.756 asserts 5481 5481 5481 0 n/a 00:04:19.756 00:04:19.756 Elapsed time = 7.885 seconds 00:04:19.756 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.756 EAL: request: mp_malloc_sync 00:04:19.756 EAL: No shared files mode enabled, IPC is disabled 00:04:19.756 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.756 EAL: No shared files mode enabled, IPC is disabled 00:04:19.756 EAL: No shared files mode enabled, IPC is disabled 00:04:19.756 EAL: No shared files mode enabled, IPC is disabled 00:04:19.756 00:04:19.756 real 0m8.250s 00:04:19.756 user 0m6.898s 00:04:19.756 sys 0m1.179s 00:04:19.756 18:51:10 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.756 18:51:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.756 ************************************ 00:04:19.756 END TEST env_vtophys 00:04:19.756 ************************************ 00:04:19.756 18:51:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.756 18:51:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.756 18:51:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.756 18:51:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.756 
************************************ 00:04:19.756 START TEST env_pci 00:04:19.756 ************************************ 00:04:19.756 18:51:10 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.756 00:04:19.756 00:04:19.756 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.756 http://cunit.sourceforge.net/ 00:04:19.756 00:04:19.756 00:04:19.756 Suite: pci 00:04:19.756 Test: pci_hook ...[2024-11-26 18:51:10.766389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56692 has claimed it 00:04:19.756 EAL: Cannot find device (10000:00:01.0) 00:04:19.756 EAL: Failed to attach device on primary process 00:04:19.756 passed 00:04:19.756 00:04:19.756 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.756 suites 1 1 n/a 0 0 00:04:19.756 tests 1 1 1 0 0 00:04:19.756 asserts 25 25 25 0 n/a 00:04:19.756 00:04:19.756 Elapsed time = 0.007 seconds 00:04:19.756 00:04:19.757 real 0m0.074s 00:04:19.757 user 0m0.035s 00:04:19.757 sys 0m0.037s 00:04:19.757 18:51:10 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.757 18:51:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.757 ************************************ 00:04:19.757 END TEST env_pci 00:04:19.757 ************************************ 00:04:19.757 18:51:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.757 18:51:10 env -- env/env.sh@15 -- # uname 00:04:19.757 18:51:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.757 18:51:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.757 18:51:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.757 18:51:10 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:19.757 18:51:10 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.757 18:51:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.757 ************************************ 00:04:19.757 START TEST env_dpdk_post_init 00:04:19.757 ************************************ 00:04:19.757 18:51:10 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.757 EAL: Detected CPU lcores: 10 00:04:19.757 EAL: Detected NUMA nodes: 1 00:04:19.757 EAL: Detected shared linkage of DPDK 00:04:19.757 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.757 EAL: Selected IOVA mode 'PA' 00:04:19.757 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.015 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:20.015 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:20.015 Starting DPDK initialization... 00:04:20.015 Starting SPDK post initialization... 00:04:20.015 SPDK NVMe probe 00:04:20.015 Attaching to 0000:00:10.0 00:04:20.015 Attaching to 0000:00:11.0 00:04:20.015 Attached to 0000:00:10.0 00:04:20.015 Attached to 0000:00:11.0 00:04:20.015 Cleaning up... 
00:04:20.015 00:04:20.015 real 0m0.328s 00:04:20.015 user 0m0.112s 00:04:20.015 sys 0m0.114s 00:04:20.015 18:51:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.015 18:51:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.016 ************************************ 00:04:20.016 END TEST env_dpdk_post_init 00:04:20.016 ************************************ 00:04:20.016 18:51:11 env -- env/env.sh@26 -- # uname 00:04:20.016 18:51:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.016 18:51:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.016 18:51:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.016 18:51:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.016 18:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.016 ************************************ 00:04:20.016 START TEST env_mem_callbacks 00:04:20.016 ************************************ 00:04:20.016 18:51:11 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.016 EAL: Detected CPU lcores: 10 00:04:20.016 EAL: Detected NUMA nodes: 1 00:04:20.016 EAL: Detected shared linkage of DPDK 00:04:20.016 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.016 EAL: Selected IOVA mode 'PA' 00:04:20.275 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.275 00:04:20.275 00:04:20.275 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.275 http://cunit.sourceforge.net/ 00:04:20.275 00:04:20.275 00:04:20.275 Suite: memory 00:04:20.275 Test: test ... 
00:04:20.275 register 0x200000200000 2097152 00:04:20.275 malloc 3145728 00:04:20.275 register 0x200000400000 4194304 00:04:20.275 buf 0x2000004fffc0 len 3145728 PASSED 00:04:20.275 malloc 64 00:04:20.275 buf 0x2000004ffec0 len 64 PASSED 00:04:20.275 malloc 4194304 00:04:20.275 register 0x200000800000 6291456 00:04:20.275 buf 0x2000009fffc0 len 4194304 PASSED 00:04:20.275 free 0x2000004fffc0 3145728 00:04:20.275 free 0x2000004ffec0 64 00:04:20.275 unregister 0x200000400000 4194304 PASSED 00:04:20.275 free 0x2000009fffc0 4194304 00:04:20.275 unregister 0x200000800000 6291456 PASSED 00:04:20.275 malloc 8388608 00:04:20.275 register 0x200000400000 10485760 00:04:20.275 buf 0x2000005fffc0 len 8388608 PASSED 00:04:20.275 free 0x2000005fffc0 8388608 00:04:20.275 unregister 0x200000400000 10485760 PASSED 00:04:20.275 passed 00:04:20.275 00:04:20.275 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.275 suites 1 1 n/a 0 0 00:04:20.275 tests 1 1 1 0 0 00:04:20.275 asserts 15 15 15 0 n/a 00:04:20.275 00:04:20.275 Elapsed time = 0.063 seconds 00:04:20.275 00:04:20.275 real 0m0.307s 00:04:20.275 user 0m0.113s 00:04:20.275 sys 0m0.091s 00:04:20.275 18:51:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.275 18:51:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:20.275 ************************************ 00:04:20.275 END TEST env_mem_callbacks 00:04:20.275 ************************************ 00:04:20.275 00:04:20.275 real 0m9.776s 00:04:20.275 user 0m7.685s 00:04:20.275 sys 0m1.694s 00:04:20.275 18:51:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.275 18:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.275 ************************************ 00:04:20.275 END TEST env 00:04:20.275 ************************************ 00:04:20.275 18:51:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:20.275 18:51:11 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.275 18:51:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.275 18:51:11 -- common/autotest_common.sh@10 -- # set +x 00:04:20.534 ************************************ 00:04:20.534 START TEST rpc 00:04:20.534 ************************************ 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:20.534 * Looking for test storage... 00:04:20.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.534 18:51:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.534 18:51:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.534 18:51:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.534 18:51:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.534 18:51:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.534 18:51:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.534 18:51:11 rpc -- scripts/common.sh@345 -- # : 1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.534 18:51:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.534 18:51:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.534 18:51:11 rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.534 18:51:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.534 18:51:11 rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.534 18:51:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.534 18:51:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.534 18:51:11 rpc -- scripts/common.sh@368 -- # return 0 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.534 --rc genhtml_branch_coverage=1 00:04:20.534 --rc genhtml_function_coverage=1 00:04:20.534 --rc genhtml_legend=1 00:04:20.534 --rc geninfo_all_blocks=1 00:04:20.534 --rc geninfo_unexecuted_blocks=1 00:04:20.534 00:04:20.534 ' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.534 --rc genhtml_branch_coverage=1 00:04:20.534 --rc genhtml_function_coverage=1 00:04:20.534 --rc genhtml_legend=1 00:04:20.534 --rc geninfo_all_blocks=1 00:04:20.534 --rc geninfo_unexecuted_blocks=1 00:04:20.534 00:04:20.534 ' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:20.534 --rc genhtml_branch_coverage=1 00:04:20.534 --rc genhtml_function_coverage=1 00:04:20.534 --rc genhtml_legend=1 00:04:20.534 --rc geninfo_all_blocks=1 00:04:20.534 --rc geninfo_unexecuted_blocks=1 00:04:20.534 00:04:20.534 ' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.534 --rc genhtml_branch_coverage=1 00:04:20.534 --rc genhtml_function_coverage=1 00:04:20.534 --rc genhtml_legend=1 00:04:20.534 --rc geninfo_all_blocks=1 00:04:20.534 --rc geninfo_unexecuted_blocks=1 00:04:20.534 00:04:20.534 ' 00:04:20.534 18:51:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56825 00:04:20.534 18:51:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:20.534 18:51:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.534 18:51:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56825 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 56825 ']' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.534 18:51:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.793 [2024-11-26 18:51:11.971457] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:04:20.793 [2024-11-26 18:51:11.971686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56825 ] 00:04:21.058 [2024-11-26 18:51:12.162839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.058 [2024-11-26 18:51:12.301853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:21.058 [2024-11-26 18:51:12.301969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56825' to capture a snapshot of events at runtime. 00:04:21.058 [2024-11-26 18:51:12.301987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.058 [2024-11-26 18:51:12.302001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.058 [2024-11-26 18:51:12.302012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56825 for offline analysis/debug. 
00:04:21.058 [2024-11-26 18:51:12.303471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.007 18:51:13 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.007 18:51:13 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.007 18:51:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.007 18:51:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.007 18:51:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.007 18:51:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.007 18:51:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.007 18:51:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.007 18:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.007 ************************************ 00:04:22.007 START TEST rpc_integrity 00:04:22.007 ************************************ 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.007 18:51:13 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.007 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.007 { 00:04:22.007 "name": "Malloc0", 00:04:22.007 "aliases": [ 00:04:22.007 "2a35c0d5-588a-460c-b97e-16318c953954" 00:04:22.007 ], 00:04:22.007 "product_name": "Malloc disk", 00:04:22.007 "block_size": 512, 00:04:22.007 "num_blocks": 16384, 00:04:22.007 "uuid": "2a35c0d5-588a-460c-b97e-16318c953954", 00:04:22.007 "assigned_rate_limits": { 00:04:22.007 "rw_ios_per_sec": 0, 00:04:22.007 "rw_mbytes_per_sec": 0, 00:04:22.007 "r_mbytes_per_sec": 0, 00:04:22.007 "w_mbytes_per_sec": 0 00:04:22.007 }, 00:04:22.007 "claimed": false, 00:04:22.007 "zoned": false, 00:04:22.007 "supported_io_types": { 00:04:22.007 "read": true, 00:04:22.007 "write": true, 00:04:22.007 "unmap": true, 00:04:22.007 "flush": true, 00:04:22.007 "reset": true, 00:04:22.007 "nvme_admin": false, 00:04:22.007 "nvme_io": false, 00:04:22.007 "nvme_io_md": false, 00:04:22.007 "write_zeroes": true, 00:04:22.007 "zcopy": true, 00:04:22.007 "get_zone_info": false, 00:04:22.007 "zone_management": false, 00:04:22.007 "zone_append": false, 00:04:22.007 "compare": false, 00:04:22.007 "compare_and_write": false, 00:04:22.007 "abort": true, 00:04:22.007 "seek_hole": false, 
00:04:22.007 "seek_data": false, 00:04:22.007 "copy": true, 00:04:22.007 "nvme_iov_md": false 00:04:22.007 }, 00:04:22.007 "memory_domains": [ 00:04:22.007 { 00:04:22.007 "dma_device_id": "system", 00:04:22.007 "dma_device_type": 1 00:04:22.007 }, 00:04:22.007 { 00:04:22.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.007 "dma_device_type": 2 00:04:22.007 } 00:04:22.007 ], 00:04:22.007 "driver_specific": {} 00:04:22.007 } 00:04:22.007 ]' 00:04:22.007 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.266 [2024-11-26 18:51:13.385470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:22.266 [2024-11-26 18:51:13.385601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.266 [2024-11-26 18:51:13.385641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:22.266 [2024-11-26 18:51:13.385666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.266 [2024-11-26 18:51:13.389318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.266 [2024-11-26 18:51:13.389416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.266 Passthru0 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.266 { 00:04:22.266 "name": "Malloc0", 00:04:22.266 "aliases": [ 00:04:22.266 "2a35c0d5-588a-460c-b97e-16318c953954" 00:04:22.266 ], 00:04:22.266 "product_name": "Malloc disk", 00:04:22.266 "block_size": 512, 00:04:22.266 "num_blocks": 16384, 00:04:22.266 "uuid": "2a35c0d5-588a-460c-b97e-16318c953954", 00:04:22.266 "assigned_rate_limits": { 00:04:22.266 "rw_ios_per_sec": 0, 00:04:22.266 "rw_mbytes_per_sec": 0, 00:04:22.266 "r_mbytes_per_sec": 0, 00:04:22.266 "w_mbytes_per_sec": 0 00:04:22.266 }, 00:04:22.266 "claimed": true, 00:04:22.266 "claim_type": "exclusive_write", 00:04:22.266 "zoned": false, 00:04:22.266 "supported_io_types": { 00:04:22.266 "read": true, 00:04:22.266 "write": true, 00:04:22.266 "unmap": true, 00:04:22.266 "flush": true, 00:04:22.266 "reset": true, 00:04:22.266 "nvme_admin": false, 00:04:22.266 "nvme_io": false, 00:04:22.266 "nvme_io_md": false, 00:04:22.266 "write_zeroes": true, 00:04:22.266 "zcopy": true, 00:04:22.266 "get_zone_info": false, 00:04:22.266 "zone_management": false, 00:04:22.266 "zone_append": false, 00:04:22.266 "compare": false, 00:04:22.266 "compare_and_write": false, 00:04:22.266 "abort": true, 00:04:22.266 "seek_hole": false, 00:04:22.266 "seek_data": false, 00:04:22.266 "copy": true, 00:04:22.266 "nvme_iov_md": false 00:04:22.266 }, 00:04:22.266 "memory_domains": [ 00:04:22.266 { 00:04:22.266 "dma_device_id": "system", 00:04:22.266 "dma_device_type": 1 00:04:22.266 }, 00:04:22.266 { 00:04:22.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.266 "dma_device_type": 2 00:04:22.266 } 00:04:22.266 ], 00:04:22.266 "driver_specific": {} 00:04:22.266 }, 00:04:22.266 { 00:04:22.266 "name": "Passthru0", 00:04:22.266 "aliases": [ 00:04:22.266 "d53eb874-f652-54ec-ae9c-9fe2663d73da" 00:04:22.266 ], 00:04:22.266 "product_name": "passthru", 00:04:22.266 
"block_size": 512, 00:04:22.266 "num_blocks": 16384, 00:04:22.266 "uuid": "d53eb874-f652-54ec-ae9c-9fe2663d73da", 00:04:22.266 "assigned_rate_limits": { 00:04:22.266 "rw_ios_per_sec": 0, 00:04:22.266 "rw_mbytes_per_sec": 0, 00:04:22.266 "r_mbytes_per_sec": 0, 00:04:22.266 "w_mbytes_per_sec": 0 00:04:22.266 }, 00:04:22.266 "claimed": false, 00:04:22.266 "zoned": false, 00:04:22.266 "supported_io_types": { 00:04:22.266 "read": true, 00:04:22.266 "write": true, 00:04:22.266 "unmap": true, 00:04:22.266 "flush": true, 00:04:22.266 "reset": true, 00:04:22.266 "nvme_admin": false, 00:04:22.266 "nvme_io": false, 00:04:22.266 "nvme_io_md": false, 00:04:22.266 "write_zeroes": true, 00:04:22.266 "zcopy": true, 00:04:22.266 "get_zone_info": false, 00:04:22.266 "zone_management": false, 00:04:22.266 "zone_append": false, 00:04:22.266 "compare": false, 00:04:22.266 "compare_and_write": false, 00:04:22.266 "abort": true, 00:04:22.266 "seek_hole": false, 00:04:22.266 "seek_data": false, 00:04:22.266 "copy": true, 00:04:22.266 "nvme_iov_md": false 00:04:22.266 }, 00:04:22.266 "memory_domains": [ 00:04:22.266 { 00:04:22.266 "dma_device_id": "system", 00:04:22.266 "dma_device_type": 1 00:04:22.266 }, 00:04:22.266 { 00:04:22.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.266 "dma_device_type": 2 00:04:22.266 } 00:04:22.266 ], 00:04:22.266 "driver_specific": { 00:04:22.266 "passthru": { 00:04:22.266 "name": "Passthru0", 00:04:22.266 "base_bdev_name": "Malloc0" 00:04:22.266 } 00:04:22.266 } 00:04:22.266 } 00:04:22.266 ]' 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.266 18:51:13 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.266 ************************************ 00:04:22.266 END TEST rpc_integrity 00:04:22.266 ************************************ 00:04:22.266 18:51:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.266 00:04:22.266 real 0m0.372s 00:04:22.266 user 0m0.221s 00:04:22.266 sys 0m0.042s 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.266 18:51:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.524 18:51:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.524 18:51:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.524 18:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 ************************************ 00:04:22.524 START TEST rpc_plugins 00:04:22.524 ************************************ 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.524 { 00:04:22.524 "name": "Malloc1", 00:04:22.524 "aliases": [ 00:04:22.524 "edc28630-7dbf-438b-8dcd-524105cdb1ce" 00:04:22.524 ], 00:04:22.524 "product_name": "Malloc disk", 00:04:22.524 "block_size": 4096, 00:04:22.524 "num_blocks": 256, 00:04:22.524 "uuid": "edc28630-7dbf-438b-8dcd-524105cdb1ce", 00:04:22.524 "assigned_rate_limits": { 00:04:22.524 "rw_ios_per_sec": 0, 00:04:22.524 "rw_mbytes_per_sec": 0, 00:04:22.524 "r_mbytes_per_sec": 0, 00:04:22.524 "w_mbytes_per_sec": 0 00:04:22.524 }, 00:04:22.524 "claimed": false, 00:04:22.524 "zoned": false, 00:04:22.524 "supported_io_types": { 00:04:22.524 "read": true, 00:04:22.524 "write": true, 00:04:22.524 "unmap": true, 00:04:22.524 "flush": true, 00:04:22.524 "reset": true, 00:04:22.524 "nvme_admin": false, 00:04:22.524 "nvme_io": false, 00:04:22.524 "nvme_io_md": false, 00:04:22.524 "write_zeroes": true, 00:04:22.524 "zcopy": true, 00:04:22.524 "get_zone_info": false, 00:04:22.524 "zone_management": false, 00:04:22.524 "zone_append": false, 00:04:22.524 "compare": false, 00:04:22.524 "compare_and_write": false, 00:04:22.524 "abort": true, 00:04:22.524 "seek_hole": false, 00:04:22.524 "seek_data": false, 00:04:22.524 "copy": 
true, 00:04:22.524 "nvme_iov_md": false 00:04:22.524 }, 00:04:22.524 "memory_domains": [ 00:04:22.524 { 00:04:22.524 "dma_device_id": "system", 00:04:22.524 "dma_device_type": 1 00:04:22.524 }, 00:04:22.524 { 00:04:22.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.524 "dma_device_type": 2 00:04:22.524 } 00:04:22.524 ], 00:04:22.524 "driver_specific": {} 00:04:22.524 } 00:04:22.524 ]' 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:22.524 ************************************ 00:04:22.524 END TEST rpc_plugins 00:04:22.524 ************************************ 00:04:22.524 18:51:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:22.524 00:04:22.524 real 0m0.165s 00:04:22.524 user 0m0.100s 00:04:22.524 sys 0m0.023s 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.524 18:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:22.524 18:51:13 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.524 18:51:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.524 18:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 ************************************ 00:04:22.524 START TEST rpc_trace_cmd_test 00:04:22.524 ************************************ 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.524 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:22.525 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56825", 00:04:22.525 "tpoint_group_mask": "0x8", 00:04:22.525 "iscsi_conn": { 00:04:22.525 "mask": "0x2", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "scsi": { 00:04:22.525 "mask": "0x4", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "bdev": { 00:04:22.525 "mask": "0x8", 00:04:22.525 "tpoint_mask": "0xffffffffffffffff" 00:04:22.525 }, 00:04:22.525 "nvmf_rdma": { 00:04:22.525 "mask": "0x10", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "nvmf_tcp": { 00:04:22.525 "mask": "0x20", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "ftl": { 00:04:22.525 "mask": "0x40", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "blobfs": { 00:04:22.525 "mask": "0x80", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "dsa": { 00:04:22.525 "mask": "0x200", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "thread": { 00:04:22.525 "mask": "0x400", 00:04:22.525 
"tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "nvme_pcie": { 00:04:22.525 "mask": "0x800", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "iaa": { 00:04:22.525 "mask": "0x1000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "nvme_tcp": { 00:04:22.525 "mask": "0x2000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "bdev_nvme": { 00:04:22.525 "mask": "0x4000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "sock": { 00:04:22.525 "mask": "0x8000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "blob": { 00:04:22.525 "mask": "0x10000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "bdev_raid": { 00:04:22.525 "mask": "0x20000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 }, 00:04:22.525 "scheduler": { 00:04:22.525 "mask": "0x40000", 00:04:22.525 "tpoint_mask": "0x0" 00:04:22.525 } 00:04:22.525 }' 00:04:22.783 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:22.783 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:22.783 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:22.783 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:22.783 18:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:22.783 18:51:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:22.783 18:51:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:22.783 18:51:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:22.783 18:51:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.042 ************************************ 00:04:23.042 END TEST rpc_trace_cmd_test 00:04:23.042 ************************************ 00:04:23.042 18:51:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.042 00:04:23.042 real 0m0.290s 00:04:23.042 user 
0m0.236s 00:04:23.042 sys 0m0.042s 00:04:23.042 18:51:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.042 18:51:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.042 18:51:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.042 18:51:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.042 18:51:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.042 18:51:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.042 18:51:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.042 18:51:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.042 ************************************ 00:04:23.042 START TEST rpc_daemon_integrity 00:04:23.042 ************************************ 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.042 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.042 { 00:04:23.042 "name": "Malloc2", 00:04:23.042 "aliases": [ 00:04:23.042 "210b8d4d-dd0b-478f-a98e-6be3dad8c1ce" 00:04:23.042 ], 00:04:23.042 "product_name": "Malloc disk", 00:04:23.042 "block_size": 512, 00:04:23.042 "num_blocks": 16384, 00:04:23.042 "uuid": "210b8d4d-dd0b-478f-a98e-6be3dad8c1ce", 00:04:23.042 "assigned_rate_limits": { 00:04:23.042 "rw_ios_per_sec": 0, 00:04:23.042 "rw_mbytes_per_sec": 0, 00:04:23.042 "r_mbytes_per_sec": 0, 00:04:23.042 "w_mbytes_per_sec": 0 00:04:23.042 }, 00:04:23.042 "claimed": false, 00:04:23.042 "zoned": false, 00:04:23.042 "supported_io_types": { 00:04:23.042 "read": true, 00:04:23.042 "write": true, 00:04:23.042 "unmap": true, 00:04:23.042 "flush": true, 00:04:23.042 "reset": true, 00:04:23.042 "nvme_admin": false, 00:04:23.042 "nvme_io": false, 00:04:23.042 "nvme_io_md": false, 00:04:23.042 "write_zeroes": true, 00:04:23.042 "zcopy": true, 00:04:23.042 "get_zone_info": false, 00:04:23.042 "zone_management": false, 00:04:23.043 "zone_append": false, 00:04:23.043 "compare": false, 00:04:23.043 "compare_and_write": false, 00:04:23.043 "abort": true, 00:04:23.043 "seek_hole": false, 00:04:23.043 "seek_data": false, 00:04:23.043 "copy": true, 00:04:23.043 "nvme_iov_md": false 00:04:23.043 }, 00:04:23.043 "memory_domains": [ 00:04:23.043 { 00:04:23.043 "dma_device_id": "system", 00:04:23.043 "dma_device_type": 1 00:04:23.043 }, 00:04:23.043 { 00:04:23.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.043 "dma_device_type": 2 00:04:23.043 } 
00:04:23.043 ], 00:04:23.043 "driver_specific": {} 00:04:23.043 } 00:04:23.043 ]' 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.043 [2024-11-26 18:51:14.357472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:23.043 [2024-11-26 18:51:14.357784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.043 [2024-11-26 18:51:14.357831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:23.043 [2024-11-26 18:51:14.357852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.043 [2024-11-26 18:51:14.361196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.043 [2024-11-26 18:51:14.361262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.043 Passthru0 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.043 { 00:04:23.043 "name": "Malloc2", 00:04:23.043 "aliases": [ 00:04:23.043 "210b8d4d-dd0b-478f-a98e-6be3dad8c1ce" 
00:04:23.043 ], 00:04:23.043 "product_name": "Malloc disk", 00:04:23.043 "block_size": 512, 00:04:23.043 "num_blocks": 16384, 00:04:23.043 "uuid": "210b8d4d-dd0b-478f-a98e-6be3dad8c1ce", 00:04:23.043 "assigned_rate_limits": { 00:04:23.043 "rw_ios_per_sec": 0, 00:04:23.043 "rw_mbytes_per_sec": 0, 00:04:23.043 "r_mbytes_per_sec": 0, 00:04:23.043 "w_mbytes_per_sec": 0 00:04:23.043 }, 00:04:23.043 "claimed": true, 00:04:23.043 "claim_type": "exclusive_write", 00:04:23.043 "zoned": false, 00:04:23.043 "supported_io_types": { 00:04:23.043 "read": true, 00:04:23.043 "write": true, 00:04:23.043 "unmap": true, 00:04:23.043 "flush": true, 00:04:23.043 "reset": true, 00:04:23.043 "nvme_admin": false, 00:04:23.043 "nvme_io": false, 00:04:23.043 "nvme_io_md": false, 00:04:23.043 "write_zeroes": true, 00:04:23.043 "zcopy": true, 00:04:23.043 "get_zone_info": false, 00:04:23.043 "zone_management": false, 00:04:23.043 "zone_append": false, 00:04:23.043 "compare": false, 00:04:23.043 "compare_and_write": false, 00:04:23.043 "abort": true, 00:04:23.043 "seek_hole": false, 00:04:23.043 "seek_data": false, 00:04:23.043 "copy": true, 00:04:23.043 "nvme_iov_md": false 00:04:23.043 }, 00:04:23.043 "memory_domains": [ 00:04:23.043 { 00:04:23.043 "dma_device_id": "system", 00:04:23.043 "dma_device_type": 1 00:04:23.043 }, 00:04:23.043 { 00:04:23.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.043 "dma_device_type": 2 00:04:23.043 } 00:04:23.043 ], 00:04:23.043 "driver_specific": {} 00:04:23.043 }, 00:04:23.043 { 00:04:23.043 "name": "Passthru0", 00:04:23.043 "aliases": [ 00:04:23.043 "a3da8624-3057-540a-95e3-27c502a0cfad" 00:04:23.043 ], 00:04:23.043 "product_name": "passthru", 00:04:23.043 "block_size": 512, 00:04:23.043 "num_blocks": 16384, 00:04:23.043 "uuid": "a3da8624-3057-540a-95e3-27c502a0cfad", 00:04:23.043 "assigned_rate_limits": { 00:04:23.043 "rw_ios_per_sec": 0, 00:04:23.043 "rw_mbytes_per_sec": 0, 00:04:23.043 "r_mbytes_per_sec": 0, 00:04:23.043 "w_mbytes_per_sec": 0 
00:04:23.043 }, 00:04:23.043 "claimed": false, 00:04:23.043 "zoned": false, 00:04:23.043 "supported_io_types": { 00:04:23.043 "read": true, 00:04:23.043 "write": true, 00:04:23.043 "unmap": true, 00:04:23.043 "flush": true, 00:04:23.043 "reset": true, 00:04:23.043 "nvme_admin": false, 00:04:23.043 "nvme_io": false, 00:04:23.043 "nvme_io_md": false, 00:04:23.043 "write_zeroes": true, 00:04:23.043 "zcopy": true, 00:04:23.043 "get_zone_info": false, 00:04:23.043 "zone_management": false, 00:04:23.043 "zone_append": false, 00:04:23.043 "compare": false, 00:04:23.043 "compare_and_write": false, 00:04:23.043 "abort": true, 00:04:23.043 "seek_hole": false, 00:04:23.043 "seek_data": false, 00:04:23.043 "copy": true, 00:04:23.043 "nvme_iov_md": false 00:04:23.043 }, 00:04:23.043 "memory_domains": [ 00:04:23.043 { 00:04:23.043 "dma_device_id": "system", 00:04:23.043 "dma_device_type": 1 00:04:23.043 }, 00:04:23.043 { 00:04:23.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.043 "dma_device_type": 2 00:04:23.043 } 00:04:23.043 ], 00:04:23.043 "driver_specific": { 00:04:23.043 "passthru": { 00:04:23.043 "name": "Passthru0", 00:04:23.043 "base_bdev_name": "Malloc2" 00:04:23.043 } 00:04:23.043 } 00:04:23.043 } 00:04:23.043 ]' 00:04:23.043 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.302 ************************************ 00:04:23.302 END TEST rpc_daemon_integrity 00:04:23.302 ************************************ 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.302 00:04:23.302 real 0m0.354s 00:04:23.302 user 0m0.209s 00:04:23.302 sys 0m0.046s 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.302 18:51:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.302 18:51:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.302 18:51:14 rpc -- rpc/rpc.sh@84 -- # killprocess 56825 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@954 -- # '[' -z 56825 ']' 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@958 -- # kill -0 56825 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56825 00:04:23.302 killing process with pid 56825 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56825' 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@973 -- # kill 56825 00:04:23.302 18:51:14 rpc -- common/autotest_common.sh@978 -- # wait 56825 00:04:25.838 00:04:25.838 real 0m5.303s 00:04:25.838 user 0m5.969s 00:04:25.838 sys 0m0.983s 00:04:25.838 ************************************ 00:04:25.838 END TEST rpc 00:04:25.838 ************************************ 00:04:25.838 18:51:16 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.838 18:51:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.838 18:51:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.838 18:51:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.838 18:51:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.838 18:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.838 ************************************ 00:04:25.838 START TEST skip_rpc 00:04:25.838 ************************************ 00:04:25.838 18:51:17 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.838 * Looking for test storage... 
00:04:25.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.838 18:51:17 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.838 18:51:17 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.838 18:51:17 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.838 18:51:17 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.838 18:51:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.097 18:51:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.097 --rc genhtml_branch_coverage=1 00:04:26.097 --rc genhtml_function_coverage=1 00:04:26.097 --rc genhtml_legend=1 00:04:26.097 --rc geninfo_all_blocks=1 00:04:26.097 --rc geninfo_unexecuted_blocks=1 00:04:26.097 00:04:26.097 ' 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.097 --rc genhtml_branch_coverage=1 00:04:26.097 --rc genhtml_function_coverage=1 00:04:26.097 --rc genhtml_legend=1 00:04:26.097 --rc geninfo_all_blocks=1 00:04:26.097 --rc geninfo_unexecuted_blocks=1 00:04:26.097 00:04:26.097 ' 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:26.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.097 --rc genhtml_branch_coverage=1 00:04:26.097 --rc genhtml_function_coverage=1 00:04:26.097 --rc genhtml_legend=1 00:04:26.097 --rc geninfo_all_blocks=1 00:04:26.097 --rc geninfo_unexecuted_blocks=1 00:04:26.097 00:04:26.097 ' 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.097 --rc genhtml_branch_coverage=1 00:04:26.097 --rc genhtml_function_coverage=1 00:04:26.097 --rc genhtml_legend=1 00:04:26.097 --rc geninfo_all_blocks=1 00:04:26.097 --rc geninfo_unexecuted_blocks=1 00:04:26.097 00:04:26.097 ' 00:04:26.097 18:51:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.097 18:51:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.097 18:51:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.097 18:51:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.097 ************************************ 00:04:26.097 START TEST skip_rpc 00:04:26.097 ************************************ 00:04:26.097 18:51:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:26.098 18:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57054 00:04:26.098 18:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:26.098 18:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.098 18:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:26.098 [2024-11-26 18:51:17.386178] Starting SPDK v25.01-pre 
git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:04:26.098 [2024-11-26 18:51:17.386775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57054 ] 00:04:26.357 [2024-11-26 18:51:17.587546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.617 [2024-11-26 18:51:17.752166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57054 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57054 ']' 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57054 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57054 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57054' 00:04:31.896 killing process with pid 57054 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57054 00:04:31.896 18:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57054 00:04:33.270 00:04:33.270 real 0m7.306s 00:04:33.270 user 0m6.677s 00:04:33.270 sys 0m0.520s 00:04:33.270 18:51:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.270 ************************************ 00:04:33.270 END TEST skip_rpc 00:04:33.270 ************************************ 00:04:33.270 18:51:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.270 18:51:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:33.270 18:51:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.270 18:51:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.270 18:51:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.270 
************************************ 00:04:33.270 START TEST skip_rpc_with_json 00:04:33.270 ************************************ 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:33.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57158 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57158 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57158 ']' 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.270 18:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.534 [2024-11-26 18:51:24.718372] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:04:33.534 [2024-11-26 18:51:24.718562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57158 ] 00:04:33.830 [2024-11-26 18:51:24.905981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.830 [2024-11-26 18:51:25.043913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.767 [2024-11-26 18:51:25.921794] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.767 request: 00:04:34.767 { 00:04:34.767 "trtype": "tcp", 00:04:34.767 "method": "nvmf_get_transports", 00:04:34.767 "req_id": 1 00:04:34.767 } 00:04:34.767 Got JSON-RPC error response 00:04:34.767 response: 00:04:34.767 { 00:04:34.767 "code": -19, 00:04:34.767 "message": "No such device" 00:04:34.767 } 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.767 [2024-11-26 18:51:25.933928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.767 18:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.767 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.767 18:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.767 { 00:04:34.767 "subsystems": [ 00:04:34.767 { 00:04:34.767 "subsystem": "fsdev", 00:04:34.767 "config": [ 00:04:34.767 { 00:04:34.767 "method": "fsdev_set_opts", 00:04:34.767 "params": { 00:04:34.767 "fsdev_io_pool_size": 65535, 00:04:34.767 "fsdev_io_cache_size": 256 00:04:34.767 } 00:04:34.767 } 00:04:34.767 ] 00:04:34.767 }, 00:04:34.767 { 00:04:34.767 "subsystem": "keyring", 00:04:34.767 "config": [] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "iobuf", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "iobuf_set_options", 00:04:34.768 "params": { 00:04:34.768 "small_pool_count": 8192, 00:04:34.768 "large_pool_count": 1024, 00:04:34.768 "small_bufsize": 8192, 00:04:34.768 "large_bufsize": 135168, 00:04:34.768 "enable_numa": false 00:04:34.768 } 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "sock", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "sock_set_default_impl", 00:04:34.768 "params": { 00:04:34.768 "impl_name": "posix" 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "sock_impl_set_options", 00:04:34.768 "params": { 00:04:34.768 "impl_name": "ssl", 00:04:34.768 "recv_buf_size": 4096, 00:04:34.768 "send_buf_size": 4096, 00:04:34.768 "enable_recv_pipe": true, 00:04:34.768 "enable_quickack": false, 00:04:34.768 
"enable_placement_id": 0, 00:04:34.768 "enable_zerocopy_send_server": true, 00:04:34.768 "enable_zerocopy_send_client": false, 00:04:34.768 "zerocopy_threshold": 0, 00:04:34.768 "tls_version": 0, 00:04:34.768 "enable_ktls": false 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "sock_impl_set_options", 00:04:34.768 "params": { 00:04:34.768 "impl_name": "posix", 00:04:34.768 "recv_buf_size": 2097152, 00:04:34.768 "send_buf_size": 2097152, 00:04:34.768 "enable_recv_pipe": true, 00:04:34.768 "enable_quickack": false, 00:04:34.768 "enable_placement_id": 0, 00:04:34.768 "enable_zerocopy_send_server": true, 00:04:34.768 "enable_zerocopy_send_client": false, 00:04:34.768 "zerocopy_threshold": 0, 00:04:34.768 "tls_version": 0, 00:04:34.768 "enable_ktls": false 00:04:34.768 } 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "vmd", 00:04:34.768 "config": [] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "accel", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "accel_set_options", 00:04:34.768 "params": { 00:04:34.768 "small_cache_size": 128, 00:04:34.768 "large_cache_size": 16, 00:04:34.768 "task_count": 2048, 00:04:34.768 "sequence_count": 2048, 00:04:34.768 "buf_count": 2048 00:04:34.768 } 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "bdev", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "bdev_set_options", 00:04:34.768 "params": { 00:04:34.768 "bdev_io_pool_size": 65535, 00:04:34.768 "bdev_io_cache_size": 256, 00:04:34.768 "bdev_auto_examine": true, 00:04:34.768 "iobuf_small_cache_size": 128, 00:04:34.768 "iobuf_large_cache_size": 16 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "bdev_raid_set_options", 00:04:34.768 "params": { 00:04:34.768 "process_window_size_kb": 1024, 00:04:34.768 "process_max_bandwidth_mb_sec": 0 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "bdev_iscsi_set_options", 
00:04:34.768 "params": { 00:04:34.768 "timeout_sec": 30 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "bdev_nvme_set_options", 00:04:34.768 "params": { 00:04:34.768 "action_on_timeout": "none", 00:04:34.768 "timeout_us": 0, 00:04:34.768 "timeout_admin_us": 0, 00:04:34.768 "keep_alive_timeout_ms": 10000, 00:04:34.768 "arbitration_burst": 0, 00:04:34.768 "low_priority_weight": 0, 00:04:34.768 "medium_priority_weight": 0, 00:04:34.768 "high_priority_weight": 0, 00:04:34.768 "nvme_adminq_poll_period_us": 10000, 00:04:34.768 "nvme_ioq_poll_period_us": 0, 00:04:34.768 "io_queue_requests": 0, 00:04:34.768 "delay_cmd_submit": true, 00:04:34.768 "transport_retry_count": 4, 00:04:34.768 "bdev_retry_count": 3, 00:04:34.768 "transport_ack_timeout": 0, 00:04:34.768 "ctrlr_loss_timeout_sec": 0, 00:04:34.768 "reconnect_delay_sec": 0, 00:04:34.768 "fast_io_fail_timeout_sec": 0, 00:04:34.768 "disable_auto_failback": false, 00:04:34.768 "generate_uuids": false, 00:04:34.768 "transport_tos": 0, 00:04:34.768 "nvme_error_stat": false, 00:04:34.768 "rdma_srq_size": 0, 00:04:34.768 "io_path_stat": false, 00:04:34.768 "allow_accel_sequence": false, 00:04:34.768 "rdma_max_cq_size": 0, 00:04:34.768 "rdma_cm_event_timeout_ms": 0, 00:04:34.768 "dhchap_digests": [ 00:04:34.768 "sha256", 00:04:34.768 "sha384", 00:04:34.768 "sha512" 00:04:34.768 ], 00:04:34.768 "dhchap_dhgroups": [ 00:04:34.768 "null", 00:04:34.768 "ffdhe2048", 00:04:34.768 "ffdhe3072", 00:04:34.768 "ffdhe4096", 00:04:34.768 "ffdhe6144", 00:04:34.768 "ffdhe8192" 00:04:34.768 ] 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "bdev_nvme_set_hotplug", 00:04:34.768 "params": { 00:04:34.768 "period_us": 100000, 00:04:34.768 "enable": false 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "bdev_wait_for_examine" 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "scsi", 00:04:34.768 "config": null 00:04:34.768 }, 00:04:34.768 { 
00:04:34.768 "subsystem": "scheduler", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "framework_set_scheduler", 00:04:34.768 "params": { 00:04:34.768 "name": "static" 00:04:34.768 } 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "vhost_scsi", 00:04:34.768 "config": [] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "vhost_blk", 00:04:34.768 "config": [] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "ublk", 00:04:34.768 "config": [] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "nbd", 00:04:34.768 "config": [] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "nvmf", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "nvmf_set_config", 00:04:34.768 "params": { 00:04:34.768 "discovery_filter": "match_any", 00:04:34.768 "admin_cmd_passthru": { 00:04:34.768 "identify_ctrlr": false 00:04:34.768 }, 00:04:34.768 "dhchap_digests": [ 00:04:34.768 "sha256", 00:04:34.768 "sha384", 00:04:34.768 "sha512" 00:04:34.768 ], 00:04:34.768 "dhchap_dhgroups": [ 00:04:34.768 "null", 00:04:34.768 "ffdhe2048", 00:04:34.768 "ffdhe3072", 00:04:34.768 "ffdhe4096", 00:04:34.768 "ffdhe6144", 00:04:34.768 "ffdhe8192" 00:04:34.768 ] 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "nvmf_set_max_subsystems", 00:04:34.768 "params": { 00:04:34.768 "max_subsystems": 1024 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "nvmf_set_crdt", 00:04:34.768 "params": { 00:04:34.768 "crdt1": 0, 00:04:34.768 "crdt2": 0, 00:04:34.768 "crdt3": 0 00:04:34.768 } 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "method": "nvmf_create_transport", 00:04:34.768 "params": { 00:04:34.768 "trtype": "TCP", 00:04:34.768 "max_queue_depth": 128, 00:04:34.768 "max_io_qpairs_per_ctrlr": 127, 00:04:34.768 "in_capsule_data_size": 4096, 00:04:34.768 "max_io_size": 131072, 00:04:34.768 "io_unit_size": 131072, 00:04:34.768 "max_aq_depth": 128, 00:04:34.768 "num_shared_buffers": 511, 
00:04:34.768 "buf_cache_size": 4294967295, 00:04:34.768 "dif_insert_or_strip": false, 00:04:34.768 "zcopy": false, 00:04:34.768 "c2h_success": true, 00:04:34.768 "sock_priority": 0, 00:04:34.768 "abort_timeout_sec": 1, 00:04:34.768 "ack_timeout": 0, 00:04:34.768 "data_wr_pool_size": 0 00:04:34.768 } 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 }, 00:04:34.768 { 00:04:34.768 "subsystem": "iscsi", 00:04:34.768 "config": [ 00:04:34.768 { 00:04:34.768 "method": "iscsi_set_options", 00:04:34.768 "params": { 00:04:34.768 "node_base": "iqn.2016-06.io.spdk", 00:04:34.768 "max_sessions": 128, 00:04:34.768 "max_connections_per_session": 2, 00:04:34.768 "max_queue_depth": 64, 00:04:34.768 "default_time2wait": 2, 00:04:34.768 "default_time2retain": 20, 00:04:34.768 "first_burst_length": 8192, 00:04:34.768 "immediate_data": true, 00:04:34.768 "allow_duplicated_isid": false, 00:04:34.768 "error_recovery_level": 0, 00:04:34.768 "nop_timeout": 60, 00:04:34.768 "nop_in_interval": 30, 00:04:34.768 "disable_chap": false, 00:04:34.768 "require_chap": false, 00:04:34.768 "mutual_chap": false, 00:04:34.768 "chap_group": 0, 00:04:34.768 "max_large_datain_per_connection": 64, 00:04:34.768 "max_r2t_per_connection": 4, 00:04:34.768 "pdu_pool_size": 36864, 00:04:34.768 "immediate_data_pool_size": 16384, 00:04:34.768 "data_out_pool_size": 2048 00:04:34.768 } 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 } 00:04:34.768 ] 00:04:34.768 } 00:04:34.768 18:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.768 18:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57158 00:04:34.768 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57158 ']' 00:04:34.769 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57158 00:04:34.769 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:34.769 18:51:26 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.769 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57158 00:04:35.026 killing process with pid 57158 00:04:35.026 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.026 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.026 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57158' 00:04:35.026 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57158 00:04:35.026 18:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57158 00:04:37.559 18:51:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57214 00:04:37.559 18:51:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.559 18:51:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57214 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57214 ']' 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57214 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57214 00:04:42.965 killing process with pid 57214 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57214' 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57214 00:04:42.965 18:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57214 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.342 ************************************ 00:04:44.342 END TEST skip_rpc_with_json 00:04:44.342 ************************************ 00:04:44.342 00:04:44.342 real 0m11.062s 00:04:44.342 user 0m10.446s 00:04:44.342 sys 0m1.062s 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.342 18:51:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:44.342 18:51:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.342 18:51:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.342 18:51:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.342 ************************************ 00:04:44.342 START TEST skip_rpc_with_delay 00:04:44.342 ************************************ 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:44.342 
18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.342 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.600 [2024-11-26 18:51:35.819073] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.600 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.601 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.601 00:04:44.601 real 0m0.177s 00:04:44.601 user 0m0.095s 00:04:44.601 sys 0m0.079s 00:04:44.601 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.601 18:51:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:44.601 ************************************ 00:04:44.601 END TEST skip_rpc_with_delay 00:04:44.601 ************************************ 00:04:44.601 18:51:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:44.601 18:51:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:44.601 18:51:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:44.601 18:51:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.601 18:51:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.601 18:51:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.601 ************************************ 00:04:44.601 START TEST exit_on_failed_rpc_init 00:04:44.601 ************************************ 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57342 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57342 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.601 18:51:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57342 ']' 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.601 18:51:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.859 [2024-11-26 18:51:36.048804] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:04:44.859 [2024-11-26 18:51:36.049004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57342 ] 00:04:44.859 [2024-11-26 18:51:36.219685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.117 [2024-11-26 18:51:36.348636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.052 18:51:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:46.052 18:51:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.052 [2024-11-26 18:51:37.379257] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:04:46.052 [2024-11-26 18:51:37.379550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57365 ] 00:04:46.311 [2024-11-26 18:51:37.578247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.570 [2024-11-26 18:51:37.731191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.570 [2024-11-26 18:51:37.731341] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:46.570 [2024-11-26 18:51:37.731375] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:46.570 [2024-11-26 18:51:37.731400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57342 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57342 ']' 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57342 00:04:46.828 18:51:38 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57342 00:04:46.828 killing process with pid 57342 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57342' 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57342 00:04:46.828 18:51:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57342 00:04:49.425 00:04:49.425 real 0m4.370s 00:04:49.425 user 0m4.888s 00:04:49.425 sys 0m0.702s 00:04:49.425 18:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.425 18:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.425 ************************************ 00:04:49.425 END TEST exit_on_failed_rpc_init 00:04:49.425 ************************************ 00:04:49.425 18:51:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.425 00:04:49.425 real 0m23.342s 00:04:49.425 user 0m22.304s 00:04:49.425 sys 0m2.574s 00:04:49.425 18:51:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.425 18:51:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.425 ************************************ 00:04:49.425 END TEST skip_rpc 00:04:49.425 ************************************ 00:04:49.425 18:51:40 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.425 18:51:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.425 18:51:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.425 18:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:49.425 ************************************ 00:04:49.425 START TEST rpc_client 00:04:49.425 ************************************ 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.425 * Looking for test storage... 00:04:49.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.425 18:51:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.425 --rc genhtml_branch_coverage=1 00:04:49.425 --rc genhtml_function_coverage=1 00:04:49.425 --rc genhtml_legend=1 00:04:49.425 --rc geninfo_all_blocks=1 00:04:49.425 --rc geninfo_unexecuted_blocks=1 00:04:49.425 00:04:49.425 ' 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.425 --rc genhtml_branch_coverage=1 00:04:49.425 --rc genhtml_function_coverage=1 00:04:49.425 --rc 
genhtml_legend=1 00:04:49.425 --rc geninfo_all_blocks=1 00:04:49.425 --rc geninfo_unexecuted_blocks=1 00:04:49.425 00:04:49.425 ' 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.425 --rc genhtml_branch_coverage=1 00:04:49.425 --rc genhtml_function_coverage=1 00:04:49.425 --rc genhtml_legend=1 00:04:49.425 --rc geninfo_all_blocks=1 00:04:49.425 --rc geninfo_unexecuted_blocks=1 00:04:49.425 00:04:49.425 ' 00:04:49.425 18:51:40 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.425 --rc genhtml_branch_coverage=1 00:04:49.425 --rc genhtml_function_coverage=1 00:04:49.425 --rc genhtml_legend=1 00:04:49.425 --rc geninfo_all_blocks=1 00:04:49.425 --rc geninfo_unexecuted_blocks=1 00:04:49.425 00:04:49.425 ' 00:04:49.425 18:51:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:49.425 OK 00:04:49.425 18:51:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:49.426 00:04:49.426 real 0m0.250s 00:04:49.426 user 0m0.131s 00:04:49.426 sys 0m0.126s 00:04:49.426 18:51:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.426 18:51:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:49.426 ************************************ 00:04:49.426 END TEST rpc_client 00:04:49.426 ************************************ 00:04:49.426 18:51:40 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:49.426 18:51:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.426 18:51:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.426 18:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:49.426 ************************************ 00:04:49.426 START TEST json_config 
00:04:49.426 ************************************ 00:04:49.426 18:51:40 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:49.426 18:51:40 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.426 18:51:40 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.426 18:51:40 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.684 18:51:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.684 18:51:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.684 18:51:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.684 18:51:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.684 18:51:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.684 18:51:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:49.684 18:51:40 json_config -- scripts/common.sh@345 -- # : 1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.684 18:51:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.684 18:51:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@353 -- # local d=1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.684 18:51:40 json_config -- scripts/common.sh@355 -- # echo 1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.684 18:51:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@353 -- # local d=2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.684 18:51:40 json_config -- scripts/common.sh@355 -- # echo 2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.684 18:51:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.684 18:51:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.684 18:51:40 json_config -- scripts/common.sh@368 -- # return 0 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.684 --rc genhtml_branch_coverage=1 00:04:49.684 --rc genhtml_function_coverage=1 00:04:49.684 --rc genhtml_legend=1 00:04:49.684 --rc geninfo_all_blocks=1 00:04:49.684 --rc geninfo_unexecuted_blocks=1 00:04:49.684 00:04:49.684 ' 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.684 --rc genhtml_branch_coverage=1 00:04:49.684 --rc genhtml_function_coverage=1 00:04:49.684 --rc genhtml_legend=1 00:04:49.684 --rc geninfo_all_blocks=1 00:04:49.684 --rc geninfo_unexecuted_blocks=1 00:04:49.684 00:04:49.684 ' 00:04:49.684 18:51:40 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.684 --rc genhtml_branch_coverage=1 00:04:49.684 --rc genhtml_function_coverage=1 00:04:49.684 --rc genhtml_legend=1 00:04:49.684 --rc geninfo_all_blocks=1 00:04:49.684 --rc geninfo_unexecuted_blocks=1 00:04:49.684 00:04:49.684 ' 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.684 --rc genhtml_branch_coverage=1 00:04:49.684 --rc genhtml_function_coverage=1 00:04:49.684 --rc genhtml_legend=1 00:04:49.684 --rc geninfo_all_blocks=1 00:04:49.684 --rc geninfo_unexecuted_blocks=1 00:04:49.684 00:04:49.684 ' 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:41809d3e-876d-42b7-b00f-49485f9c796b 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=41809d3e-876d-42b7-b00f-49485f9c796b 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.684 18:51:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.684 18:51:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.684 18:51:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.684 18:51:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.684 18:51:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.684 18:51:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.684 18:51:40 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.684 18:51:40 json_config -- paths/export.sh@5 -- # export PATH 00:04:49.684 18:51:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@51 -- # : 0 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.684 18:51:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
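The `[: : integer expression expected` complaint captured above comes from `'[' '' -eq 1 ']'`: an empty string is not a valid operand for `-eq`, so `[` prints the error and returns status 2 (which the trace then treats as false). A minimal sketch of the failure mode and a defensive rewrite follows; `FLAG` is a placeholder name, since the actual variable behind the empty string at nvmf/common.sh line 33 is not visible in this log.

```shell
#!/usr/bin/env bash
# Sketch of the failure mode logged above and a defensive rewrite.
# FLAG is a hypothetical stand-in for the unset flag in nvmf/common.sh.

FLAG=""   # unset/empty test flag, as in the captured run

# This reproduces the trace: '[' '' -eq 1 ']' fails with
# "integer expression expected" (stderr suppressed here) and is false.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag enabled"
fi

# Defaulting the expansion keeps the comparison numeric and error-free:
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled"
fi
```

The `${FLAG:-0}` expansion substitutes `0` whenever the variable is unset or empty, so the `-eq` test always sees an integer.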
00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:49.684 WARNING: No tests are enabled so not running JSON configuration tests 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:49.684 18:51:40 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:49.684 00:04:49.684 real 0m0.203s 00:04:49.684 user 0m0.142s 00:04:49.684 sys 0m0.067s 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.684 18:51:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.684 ************************************ 00:04:49.684 END TEST json_config 00:04:49.684 ************************************ 00:04:49.684 18:51:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.684 18:51:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.684 18:51:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.684 18:51:40 -- common/autotest_common.sh@10 -- # set +x 00:04:49.684 ************************************ 00:04:49.684 START TEST json_config_extra_key 00:04:49.684 ************************************ 00:04:49.684 18:51:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.684 18:51:40 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.684 18:51:40 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:49.684 18:51:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.943 18:51:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:49.943 18:51:41 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.943 18:51:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.943 --rc genhtml_branch_coverage=1 00:04:49.943 --rc genhtml_function_coverage=1 00:04:49.943 --rc genhtml_legend=1 00:04:49.943 --rc geninfo_all_blocks=1 00:04:49.943 --rc geninfo_unexecuted_blocks=1 00:04:49.943 00:04:49.943 ' 00:04:49.943 18:51:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.943 --rc genhtml_branch_coverage=1 00:04:49.943 --rc genhtml_function_coverage=1 00:04:49.943 --rc 
genhtml_legend=1 00:04:49.943 --rc geninfo_all_blocks=1 00:04:49.943 --rc geninfo_unexecuted_blocks=1 00:04:49.943 00:04:49.943 ' 00:04:49.943 18:51:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.943 --rc genhtml_branch_coverage=1 00:04:49.943 --rc genhtml_function_coverage=1 00:04:49.943 --rc genhtml_legend=1 00:04:49.943 --rc geninfo_all_blocks=1 00:04:49.943 --rc geninfo_unexecuted_blocks=1 00:04:49.943 00:04:49.943 ' 00:04:49.943 18:51:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.943 --rc genhtml_branch_coverage=1 00:04:49.943 --rc genhtml_function_coverage=1 00:04:49.943 --rc genhtml_legend=1 00:04:49.943 --rc geninfo_all_blocks=1 00:04:49.943 --rc geninfo_unexecuted_blocks=1 00:04:49.943 00:04:49.943 ' 00:04:49.943 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:41809d3e-876d-42b7-b00f-49485f9c796b 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=41809d3e-876d-42b7-b00f-49485f9c796b 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.943 18:51:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.943 18:51:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.943 18:51:41 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.943 18:51:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.943 18:51:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.943 18:51:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.943 18:51:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:49.944 18:51:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.944 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.944 18:51:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.944 18:51:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.944 18:51:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.944 INFO: launching applications... 00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:49.944 18:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57570 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.944 Waiting for target to run... 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57570 /var/tmp/spdk_tgt.sock 00:04:49.944 18:51:41 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57570 ']' 00:04:49.944 18:51:41 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.944 18:51:41 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.944 18:51:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:49.944 18:51:41 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.944 18:51:41 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.944 18:51:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.944 [2024-11-26 18:51:41.262182] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:04:49.944 [2024-11-26 18:51:41.262359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57570 ] 00:04:50.509 [2024-11-26 18:51:41.734772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.767 [2024-11-26 18:51:41.878368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.333 18:51:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.333 00:04:51.333 18:51:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.333 INFO: shutting down applications... 00:04:51.333 18:51:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
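The startup handshake traced above (launch `spdk_tgt` in the background, then `waitforlisten` on `/var/tmp/spdk_tgt.sock`) follows a bounded-polling pattern. A minimal sketch, with the assumption that a ready-file stands in for the RPC UNIX socket the real helper probes, looks like this:

```shell
#!/usr/bin/env bash
# Minimal sketch of the "waitforlisten" pattern: start a target in the
# background, then poll with a bounded retry loop until it signals
# readiness. The real autotest helper polls the spdk_tgt RPC socket;
# a temp ready-file stands in for it here to stay dependency-free.

ready=$(mktemp -u)

# Stand-in for spdk_tgt: becomes "ready" after a short startup delay.
( sleep 0.2; touch "$ready"; sleep 5 ) &
pid=$!

max_retries=100
for (( i = 0; i < max_retries; i++ )); do
  [ -e "$ready" ] && break
  sleep 0.1
done

if [ -e "$ready" ]; then
  echo "target ready"
else
  echo "timed out waiting for target" >&2
fi
kill "$pid" 2>/dev/null
rm -f "$ready"
```

The same bounded loop shape reappears later in the log for shutdown, where `kill -0 $pid` is polled up to 30 times with `sleep 0.5` between checks.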
00:04:51.333 18:51:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57570 ]] 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57570 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:51.333 18:51:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.899 18:51:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.899 18:51:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.899 18:51:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:51.899 18:51:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.464 18:51:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.464 18:51:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.464 18:51:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:52.464 18:51:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.032 18:51:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.032 18:51:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.032 18:51:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:53.032 18:51:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.291 18:51:44 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:53.291 18:51:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.291 18:51:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:53.291 18:51:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.855 18:51:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.855 18:51:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.855 18:51:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:53.855 18:51:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:54.422 SPDK target shutdown done 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:54.422 18:51:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:54.422 Success 00:04:54.422 18:51:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:54.422 00:04:54.422 real 0m4.674s 00:04:54.422 user 0m4.119s 00:04:54.422 sys 0m0.638s 00:04:54.422 18:51:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.422 18:51:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:54.422 ************************************ 00:04:54.422 END TEST json_config_extra_key 00:04:54.422 ************************************ 00:04:54.422 18:51:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.422 18:51:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.422 18:51:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.422 18:51:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.422 ************************************ 00:04:54.422 START TEST alias_rpc 00:04:54.422 ************************************ 00:04:54.422 18:51:45 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.422 * Looking for test storage... 00:04:54.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:54.422 18:51:45 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.422 18:51:45 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.422 18:51:45 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:54.681 18:51:45 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.681 18:51:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.681 --rc genhtml_branch_coverage=1 00:04:54.681 --rc genhtml_function_coverage=1 00:04:54.681 --rc genhtml_legend=1 00:04:54.681 --rc geninfo_all_blocks=1 00:04:54.681 --rc geninfo_unexecuted_blocks=1 00:04:54.681 00:04:54.681 ' 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.681 --rc genhtml_branch_coverage=1 00:04:54.681 --rc genhtml_function_coverage=1 00:04:54.681 --rc 
genhtml_legend=1 00:04:54.681 --rc geninfo_all_blocks=1 00:04:54.681 --rc geninfo_unexecuted_blocks=1 00:04:54.681 00:04:54.681 ' 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.681 --rc genhtml_branch_coverage=1 00:04:54.681 --rc genhtml_function_coverage=1 00:04:54.681 --rc genhtml_legend=1 00:04:54.681 --rc geninfo_all_blocks=1 00:04:54.681 --rc geninfo_unexecuted_blocks=1 00:04:54.681 00:04:54.681 ' 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.681 --rc genhtml_branch_coverage=1 00:04:54.681 --rc genhtml_function_coverage=1 00:04:54.681 --rc genhtml_legend=1 00:04:54.681 --rc geninfo_all_blocks=1 00:04:54.681 --rc geninfo_unexecuted_blocks=1 00:04:54.681 00:04:54.681 ' 00:04:54.681 18:51:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.681 18:51:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57687 00:04:54.681 18:51:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.681 18:51:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57687 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57687 ']' 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.681 18:51:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.681 [2024-11-26 18:51:45.977458] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:04:54.681 [2024-11-26 18:51:45.978198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57687 ] 00:04:54.940 [2024-11-26 18:51:46.181829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.199 [2024-11-26 18:51:46.341864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.135 18:51:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.135 18:51:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.135 18:51:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:56.394 18:51:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57687 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57687 ']' 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57687 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57687 00:04:56.394 killing process with pid 57687 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57687' 00:04:56.394 18:51:47 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57687 00:04:56.394 18:51:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 57687 00:04:58.926 ************************************ 00:04:58.926 END TEST alias_rpc 00:04:58.926 ************************************ 00:04:58.926 00:04:58.926 real 0m4.246s 00:04:58.926 user 0m4.399s 00:04:58.926 sys 0m0.679s 00:04:58.926 18:51:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.926 18:51:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.926 18:51:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.926 18:51:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.926 18:51:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.926 18:51:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.926 18:51:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.926 ************************************ 00:04:58.926 START TEST spdkcli_tcp 00:04:58.926 ************************************ 00:04:58.926 18:51:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.926 * Looking for test storage... 
00:04:58.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.926 18:51:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.926 ' 00:04:58.926 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.926 ' 00:04:58.926 18:51:50 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.926 --rc genhtml_branch_coverage=1 00:04:58.926 --rc genhtml_function_coverage=1 00:04:58.926 --rc genhtml_legend=1 00:04:58.926 --rc geninfo_all_blocks=1 00:04:58.926 --rc geninfo_unexecuted_blocks=1 00:04:58.926 00:04:58.927 ' 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.927 --rc genhtml_branch_coverage=1 00:04:58.927 --rc genhtml_function_coverage=1 00:04:58.927 --rc genhtml_legend=1 00:04:58.927 --rc geninfo_all_blocks=1 00:04:58.927 --rc geninfo_unexecuted_blocks=1 00:04:58.927 00:04:58.927 ' 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57794 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.927 18:51:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57794 00:04:58.927 18:51:50 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57794 ']' 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.927 18:51:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.186 [2024-11-26 18:51:50.296108] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:04:59.186 [2024-11-26 18:51:50.296524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57794 ] 00:04:59.186 [2024-11-26 18:51:50.486834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.444 [2024-11-26 18:51:50.647268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.444 [2024-11-26 18:51:50.647279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.379 18:51:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.379 18:51:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:00.379 18:51:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57811 00:05:00.379 18:51:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.379 18:51:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:00.638 [ 00:05:00.638 "bdev_malloc_delete", 
00:05:00.638 "bdev_malloc_create", 00:05:00.638 "bdev_null_resize", 00:05:00.638 "bdev_null_delete", 00:05:00.638 "bdev_null_create", 00:05:00.638 "bdev_nvme_cuse_unregister", 00:05:00.638 "bdev_nvme_cuse_register", 00:05:00.638 "bdev_opal_new_user", 00:05:00.638 "bdev_opal_set_lock_state", 00:05:00.638 "bdev_opal_delete", 00:05:00.638 "bdev_opal_get_info", 00:05:00.638 "bdev_opal_create", 00:05:00.638 "bdev_nvme_opal_revert", 00:05:00.638 "bdev_nvme_opal_init", 00:05:00.638 "bdev_nvme_send_cmd", 00:05:00.638 "bdev_nvme_set_keys", 00:05:00.638 "bdev_nvme_get_path_iostat", 00:05:00.638 "bdev_nvme_get_mdns_discovery_info", 00:05:00.638 "bdev_nvme_stop_mdns_discovery", 00:05:00.638 "bdev_nvme_start_mdns_discovery", 00:05:00.638 "bdev_nvme_set_multipath_policy", 00:05:00.638 "bdev_nvme_set_preferred_path", 00:05:00.638 "bdev_nvme_get_io_paths", 00:05:00.638 "bdev_nvme_remove_error_injection", 00:05:00.638 "bdev_nvme_add_error_injection", 00:05:00.638 "bdev_nvme_get_discovery_info", 00:05:00.638 "bdev_nvme_stop_discovery", 00:05:00.638 "bdev_nvme_start_discovery", 00:05:00.638 "bdev_nvme_get_controller_health_info", 00:05:00.638 "bdev_nvme_disable_controller", 00:05:00.638 "bdev_nvme_enable_controller", 00:05:00.638 "bdev_nvme_reset_controller", 00:05:00.638 "bdev_nvme_get_transport_statistics", 00:05:00.638 "bdev_nvme_apply_firmware", 00:05:00.638 "bdev_nvme_detach_controller", 00:05:00.638 "bdev_nvme_get_controllers", 00:05:00.638 "bdev_nvme_attach_controller", 00:05:00.638 "bdev_nvme_set_hotplug", 00:05:00.638 "bdev_nvme_set_options", 00:05:00.638 "bdev_passthru_delete", 00:05:00.638 "bdev_passthru_create", 00:05:00.638 "bdev_lvol_set_parent_bdev", 00:05:00.638 "bdev_lvol_set_parent", 00:05:00.638 "bdev_lvol_check_shallow_copy", 00:05:00.638 "bdev_lvol_start_shallow_copy", 00:05:00.638 "bdev_lvol_grow_lvstore", 00:05:00.638 "bdev_lvol_get_lvols", 00:05:00.638 "bdev_lvol_get_lvstores", 00:05:00.638 "bdev_lvol_delete", 00:05:00.638 "bdev_lvol_set_read_only", 
00:05:00.638 "bdev_lvol_resize", 00:05:00.638 "bdev_lvol_decouple_parent", 00:05:00.638 "bdev_lvol_inflate", 00:05:00.638 "bdev_lvol_rename", 00:05:00.638 "bdev_lvol_clone_bdev", 00:05:00.638 "bdev_lvol_clone", 00:05:00.638 "bdev_lvol_snapshot", 00:05:00.638 "bdev_lvol_create", 00:05:00.638 "bdev_lvol_delete_lvstore", 00:05:00.638 "bdev_lvol_rename_lvstore", 00:05:00.638 "bdev_lvol_create_lvstore", 00:05:00.638 "bdev_raid_set_options", 00:05:00.638 "bdev_raid_remove_base_bdev", 00:05:00.638 "bdev_raid_add_base_bdev", 00:05:00.638 "bdev_raid_delete", 00:05:00.638 "bdev_raid_create", 00:05:00.638 "bdev_raid_get_bdevs", 00:05:00.638 "bdev_error_inject_error", 00:05:00.638 "bdev_error_delete", 00:05:00.638 "bdev_error_create", 00:05:00.638 "bdev_split_delete", 00:05:00.638 "bdev_split_create", 00:05:00.638 "bdev_delay_delete", 00:05:00.638 "bdev_delay_create", 00:05:00.638 "bdev_delay_update_latency", 00:05:00.638 "bdev_zone_block_delete", 00:05:00.638 "bdev_zone_block_create", 00:05:00.638 "blobfs_create", 00:05:00.638 "blobfs_detect", 00:05:00.638 "blobfs_set_cache_size", 00:05:00.638 "bdev_aio_delete", 00:05:00.638 "bdev_aio_rescan", 00:05:00.638 "bdev_aio_create", 00:05:00.638 "bdev_ftl_set_property", 00:05:00.638 "bdev_ftl_get_properties", 00:05:00.638 "bdev_ftl_get_stats", 00:05:00.638 "bdev_ftl_unmap", 00:05:00.638 "bdev_ftl_unload", 00:05:00.638 "bdev_ftl_delete", 00:05:00.638 "bdev_ftl_load", 00:05:00.638 "bdev_ftl_create", 00:05:00.638 "bdev_virtio_attach_controller", 00:05:00.638 "bdev_virtio_scsi_get_devices", 00:05:00.638 "bdev_virtio_detach_controller", 00:05:00.638 "bdev_virtio_blk_set_hotplug", 00:05:00.638 "bdev_iscsi_delete", 00:05:00.638 "bdev_iscsi_create", 00:05:00.638 "bdev_iscsi_set_options", 00:05:00.638 "accel_error_inject_error", 00:05:00.638 "ioat_scan_accel_module", 00:05:00.638 "dsa_scan_accel_module", 00:05:00.638 "iaa_scan_accel_module", 00:05:00.638 "keyring_file_remove_key", 00:05:00.638 "keyring_file_add_key", 00:05:00.638 
"keyring_linux_set_options", 00:05:00.638 "fsdev_aio_delete", 00:05:00.638 "fsdev_aio_create", 00:05:00.638 "iscsi_get_histogram", 00:05:00.638 "iscsi_enable_histogram", 00:05:00.638 "iscsi_set_options", 00:05:00.638 "iscsi_get_auth_groups", 00:05:00.638 "iscsi_auth_group_remove_secret", 00:05:00.638 "iscsi_auth_group_add_secret", 00:05:00.638 "iscsi_delete_auth_group", 00:05:00.638 "iscsi_create_auth_group", 00:05:00.638 "iscsi_set_discovery_auth", 00:05:00.638 "iscsi_get_options", 00:05:00.638 "iscsi_target_node_request_logout", 00:05:00.638 "iscsi_target_node_set_redirect", 00:05:00.638 "iscsi_target_node_set_auth", 00:05:00.638 "iscsi_target_node_add_lun", 00:05:00.638 "iscsi_get_stats", 00:05:00.638 "iscsi_get_connections", 00:05:00.638 "iscsi_portal_group_set_auth", 00:05:00.638 "iscsi_start_portal_group", 00:05:00.638 "iscsi_delete_portal_group", 00:05:00.638 "iscsi_create_portal_group", 00:05:00.638 "iscsi_get_portal_groups", 00:05:00.638 "iscsi_delete_target_node", 00:05:00.638 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.638 "iscsi_target_node_add_pg_ig_maps", 00:05:00.638 "iscsi_create_target_node", 00:05:00.638 "iscsi_get_target_nodes", 00:05:00.638 "iscsi_delete_initiator_group", 00:05:00.638 "iscsi_initiator_group_remove_initiators", 00:05:00.638 "iscsi_initiator_group_add_initiators", 00:05:00.638 "iscsi_create_initiator_group", 00:05:00.638 "iscsi_get_initiator_groups", 00:05:00.638 "nvmf_set_crdt", 00:05:00.638 "nvmf_set_config", 00:05:00.638 "nvmf_set_max_subsystems", 00:05:00.638 "nvmf_stop_mdns_prr", 00:05:00.638 "nvmf_publish_mdns_prr", 00:05:00.638 "nvmf_subsystem_get_listeners", 00:05:00.638 "nvmf_subsystem_get_qpairs", 00:05:00.638 "nvmf_subsystem_get_controllers", 00:05:00.638 "nvmf_get_stats", 00:05:00.638 "nvmf_get_transports", 00:05:00.638 "nvmf_create_transport", 00:05:00.638 "nvmf_get_targets", 00:05:00.638 "nvmf_delete_target", 00:05:00.638 "nvmf_create_target", 00:05:00.638 "nvmf_subsystem_allow_any_host", 00:05:00.638 
"nvmf_subsystem_set_keys", 00:05:00.638 "nvmf_subsystem_remove_host", 00:05:00.638 "nvmf_subsystem_add_host", 00:05:00.638 "nvmf_ns_remove_host", 00:05:00.638 "nvmf_ns_add_host", 00:05:00.638 "nvmf_subsystem_remove_ns", 00:05:00.638 "nvmf_subsystem_set_ns_ana_group", 00:05:00.638 "nvmf_subsystem_add_ns", 00:05:00.638 "nvmf_subsystem_listener_set_ana_state", 00:05:00.638 "nvmf_discovery_get_referrals", 00:05:00.638 "nvmf_discovery_remove_referral", 00:05:00.638 "nvmf_discovery_add_referral", 00:05:00.638 "nvmf_subsystem_remove_listener", 00:05:00.638 "nvmf_subsystem_add_listener", 00:05:00.638 "nvmf_delete_subsystem", 00:05:00.638 "nvmf_create_subsystem", 00:05:00.638 "nvmf_get_subsystems", 00:05:00.638 "env_dpdk_get_mem_stats", 00:05:00.638 "nbd_get_disks", 00:05:00.638 "nbd_stop_disk", 00:05:00.638 "nbd_start_disk", 00:05:00.638 "ublk_recover_disk", 00:05:00.638 "ublk_get_disks", 00:05:00.638 "ublk_stop_disk", 00:05:00.638 "ublk_start_disk", 00:05:00.638 "ublk_destroy_target", 00:05:00.638 "ublk_create_target", 00:05:00.638 "virtio_blk_create_transport", 00:05:00.638 "virtio_blk_get_transports", 00:05:00.638 "vhost_controller_set_coalescing", 00:05:00.638 "vhost_get_controllers", 00:05:00.638 "vhost_delete_controller", 00:05:00.638 "vhost_create_blk_controller", 00:05:00.638 "vhost_scsi_controller_remove_target", 00:05:00.638 "vhost_scsi_controller_add_target", 00:05:00.638 "vhost_start_scsi_controller", 00:05:00.638 "vhost_create_scsi_controller", 00:05:00.638 "thread_set_cpumask", 00:05:00.638 "scheduler_set_options", 00:05:00.638 "framework_get_governor", 00:05:00.638 "framework_get_scheduler", 00:05:00.638 "framework_set_scheduler", 00:05:00.638 "framework_get_reactors", 00:05:00.638 "thread_get_io_channels", 00:05:00.638 "thread_get_pollers", 00:05:00.638 "thread_get_stats", 00:05:00.638 "framework_monitor_context_switch", 00:05:00.638 "spdk_kill_instance", 00:05:00.638 "log_enable_timestamps", 00:05:00.638 "log_get_flags", 00:05:00.638 "log_clear_flag", 
00:05:00.638 "log_set_flag", 00:05:00.638 "log_get_level", 00:05:00.638 "log_set_level", 00:05:00.638 "log_get_print_level", 00:05:00.638 "log_set_print_level", 00:05:00.638 "framework_enable_cpumask_locks", 00:05:00.638 "framework_disable_cpumask_locks", 00:05:00.638 "framework_wait_init", 00:05:00.638 "framework_start_init", 00:05:00.638 "scsi_get_devices", 00:05:00.638 "bdev_get_histogram", 00:05:00.638 "bdev_enable_histogram", 00:05:00.638 "bdev_set_qos_limit", 00:05:00.638 "bdev_set_qd_sampling_period", 00:05:00.638 "bdev_get_bdevs", 00:05:00.638 "bdev_reset_iostat", 00:05:00.638 "bdev_get_iostat", 00:05:00.638 "bdev_examine", 00:05:00.638 "bdev_wait_for_examine", 00:05:00.638 "bdev_set_options", 00:05:00.638 "accel_get_stats", 00:05:00.638 "accel_set_options", 00:05:00.638 "accel_set_driver", 00:05:00.638 "accel_crypto_key_destroy", 00:05:00.638 "accel_crypto_keys_get", 00:05:00.638 "accel_crypto_key_create", 00:05:00.638 "accel_assign_opc", 00:05:00.638 "accel_get_module_info", 00:05:00.638 "accel_get_opc_assignments", 00:05:00.638 "vmd_rescan", 00:05:00.638 "vmd_remove_device", 00:05:00.638 "vmd_enable", 00:05:00.638 "sock_get_default_impl", 00:05:00.638 "sock_set_default_impl", 00:05:00.638 "sock_impl_set_options", 00:05:00.638 "sock_impl_get_options", 00:05:00.638 "iobuf_get_stats", 00:05:00.638 "iobuf_set_options", 00:05:00.638 "keyring_get_keys", 00:05:00.638 "framework_get_pci_devices", 00:05:00.638 "framework_get_config", 00:05:00.638 "framework_get_subsystems", 00:05:00.638 "fsdev_set_opts", 00:05:00.638 "fsdev_get_opts", 00:05:00.638 "trace_get_info", 00:05:00.638 "trace_get_tpoint_group_mask", 00:05:00.638 "trace_disable_tpoint_group", 00:05:00.638 "trace_enable_tpoint_group", 00:05:00.638 "trace_clear_tpoint_mask", 00:05:00.638 "trace_set_tpoint_mask", 00:05:00.638 "notify_get_notifications", 00:05:00.638 "notify_get_types", 00:05:00.638 "spdk_get_version", 00:05:00.638 "rpc_get_methods" 00:05:00.638 ] 00:05:00.638 18:51:51 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.638 18:51:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.638 18:51:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57794 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57794 ']' 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57794 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57794 00:05:00.638 killing process with pid 57794 00:05:00.638 18:51:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.639 18:51:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.639 18:51:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57794' 00:05:00.639 18:51:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57794 00:05:00.639 18:51:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57794 00:05:03.177 ************************************ 00:05:03.177 END TEST spdkcli_tcp 00:05:03.177 ************************************ 00:05:03.177 00:05:03.177 real 0m4.173s 00:05:03.177 user 0m7.457s 00:05:03.177 sys 0m0.690s 00:05:03.177 18:51:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.177 18:51:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.177 18:51:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.177 18:51:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.177 18:51:54 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.177 18:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:03.177 ************************************ 00:05:03.177 START TEST dpdk_mem_utility 00:05:03.177 ************************************ 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.177 * Looking for test storage... 00:05:03.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.177 
18:51:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.177 18:51:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.177 --rc genhtml_branch_coverage=1 00:05:03.177 --rc genhtml_function_coverage=1 00:05:03.177 --rc genhtml_legend=1 00:05:03.177 --rc geninfo_all_blocks=1 00:05:03.177 --rc geninfo_unexecuted_blocks=1 00:05:03.177 00:05:03.177 ' 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.177 --rc 
genhtml_branch_coverage=1 00:05:03.177 --rc genhtml_function_coverage=1 00:05:03.177 --rc genhtml_legend=1 00:05:03.177 --rc geninfo_all_blocks=1 00:05:03.177 --rc geninfo_unexecuted_blocks=1 00:05:03.177 00:05:03.177 ' 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.177 --rc genhtml_branch_coverage=1 00:05:03.177 --rc genhtml_function_coverage=1 00:05:03.177 --rc genhtml_legend=1 00:05:03.177 --rc geninfo_all_blocks=1 00:05:03.177 --rc geninfo_unexecuted_blocks=1 00:05:03.177 00:05:03.177 ' 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.177 --rc genhtml_branch_coverage=1 00:05:03.177 --rc genhtml_function_coverage=1 00:05:03.177 --rc genhtml_legend=1 00:05:03.177 --rc geninfo_all_blocks=1 00:05:03.177 --rc geninfo_unexecuted_blocks=1 00:05:03.177 00:05:03.177 ' 00:05:03.177 18:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.177 18:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57916 00:05:03.177 18:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.177 18:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57916 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57916 ']' 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:03.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.177 18:51:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.177 [2024-11-26 18:51:54.530934] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:05:03.178 [2024-11-26 18:51:54.531452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:05:03.435 [2024-11-26 18:51:54.721443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.694 [2024-11-26 18:51:54.856786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.632 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.632 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:04.632 18:51:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.632 18:51:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.632 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.632 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.632 { 00:05:04.632 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.632 } 00:05:04.632 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.632 18:51:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.632 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:04.632 1 heaps totaling size 824.000000 MiB 00:05:04.632 size: 
824.000000 MiB heap id: 0 00:05:04.632 end heaps---------- 00:05:04.632 9 mempools totaling size 603.782043 MiB 00:05:04.632 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.632 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.632 size: 100.555481 MiB name: bdev_io_57916 00:05:04.632 size: 50.003479 MiB name: msgpool_57916 00:05:04.632 size: 36.509338 MiB name: fsdev_io_57916 00:05:04.632 size: 21.763794 MiB name: PDU_Pool 00:05:04.632 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.632 size: 4.133484 MiB name: evtpool_57916 00:05:04.632 size: 0.026123 MiB name: Session_Pool 00:05:04.632 end mempools------- 00:05:04.632 6 memzones totaling size 4.142822 MiB 00:05:04.632 size: 1.000366 MiB name: RG_ring_0_57916 00:05:04.632 size: 1.000366 MiB name: RG_ring_1_57916 00:05:04.632 size: 1.000366 MiB name: RG_ring_4_57916 00:05:04.632 size: 1.000366 MiB name: RG_ring_5_57916 00:05:04.632 size: 0.125366 MiB name: RG_ring_2_57916 00:05:04.632 size: 0.015991 MiB name: RG_ring_3_57916 00:05:04.632 end memzones------- 00:05:04.632 18:51:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.632 heap id: 0 total size: 824.000000 MiB number of busy elements: 310 number of free elements: 18 00:05:04.632 list of free elements. 
size: 16.782593 MiB 00:05:04.632 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:04.632 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:04.632 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:04.632 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:04.632 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:04.632 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:04.632 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:04.632 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:04.632 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:04.632 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:04.632 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:04.632 element at address: 0x20001b400000 with size: 0.563416 MiB 00:05:04.632 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:04.632 element at address: 0x200019600000 with size: 0.488464 MiB 00:05:04.632 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:04.632 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:04.632 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:04.632 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:04.632 list of standard malloc elements. 
size: 199.286499 MiB 00:05:04.632 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:04.632 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:04.632 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:04.632 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:04.632 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:04.632 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:04.632 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:04.632 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:04.632 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:04.632 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:04.632 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:04.632 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:04.632 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:04.632 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:04.633 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:04.633 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:04.633 
element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4916c0 with size: 0.000244 
MiB 00:05:04.633 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:04.633 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4932c0 
with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:04.634 element at 
address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:04.634 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:04.634 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c180 with size: 0.000244 MiB 
00:05:04.634 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886dd80 with 
size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:04.634 element at address: 
0x20002886f980 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:04.634 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:04.634 list of memzone associated elements. size: 607.930908 MiB 00:05:04.634 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:04.634 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.634 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:04.634 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.634 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:04.634 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57916_0 00:05:04.634 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:04.634 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57916_0 00:05:04.634 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:04.634 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57916_0 00:05:04.634 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:04.634 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.634 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:04.634 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.634 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:04.634 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57916_0 00:05:04.634 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:04.635 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57916 00:05:04.635 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:04.635 associated memzone info: 
size: 1.007996 MiB name: MP_evtpool_57916 00:05:04.635 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:04.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.635 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:04.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.635 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:04.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.635 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:04.635 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.635 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:04.635 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57916 00:05:04.635 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:04.635 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57916 00:05:04.635 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:04.635 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57916 00:05:04.635 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:04.635 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57916 00:05:04.635 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:04.635 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57916 00:05:04.635 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:04.635 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57916 00:05:04.635 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:04.635 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.635 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:04.635 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.635 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:04.635 associated memzone info: size: 
0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.635 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:04.635 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57916 00:05:04.635 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:04.635 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57916 00:05:04.635 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:04.635 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.635 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:04.635 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.635 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:04.635 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57916 00:05:04.635 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:04.635 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.635 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:04.635 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57916 00:05:04.635 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:04.635 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57916 00:05:04.635 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:04.635 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57916 00:05:04.635 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:04.635 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.635 18:51:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.635 18:51:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57916 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57916 ']' 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57916 
00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57916 00:05:04.635 killing process with pid 57916 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57916' 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57916 00:05:04.635 18:51:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57916 00:05:07.172 00:05:07.172 real 0m4.048s 00:05:07.172 user 0m4.077s 00:05:07.172 sys 0m0.657s 00:05:07.172 18:51:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.172 ************************************ 00:05:07.172 END TEST dpdk_mem_utility 00:05:07.172 ************************************ 00:05:07.172 18:51:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.173 18:51:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.173 18:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.173 18:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.173 18:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:07.173 ************************************ 00:05:07.173 START TEST event 00:05:07.173 ************************************ 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.173 * Looking for test storage... 
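The `killprocess 57916` trace above follows a common shell teardown pattern: probe that the pid is still alive with `kill -0` (which delivers no signal), send the termination signal, then reap the child with `wait`. A minimal standalone sketch of that pattern is below; the function body and names here are illustrative, not the actual `autotest_common.sh` implementation.

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-reap pattern traced in the log above.
# Names are ours; SPDK's killprocess also inspects uname and the
# process name, which is omitted here.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks that the process exists
    kill -0 "$pid" 2>/dev/null || return 0
    kill "$pid"
    # wait reaps the child so no zombie is left; ignore its exit code
    wait "$pid" 2>/dev/null || true
}

sleep 60 &            # throwaway background process standing in for the app
killprocess "$!"
echo "process reaped"
```

Note that `wait` only works for children of the current shell, which is why the test script that launched the app under test is also the one that calls `killprocess` on it.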
00:05:07.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.173 18:51:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.173 18:51:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.173 18:51:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.173 18:51:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.173 18:51:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.173 18:51:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.173 18:51:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.173 18:51:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.173 18:51:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.173 18:51:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.173 18:51:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.173 18:51:58 event -- scripts/common.sh@344 -- # case "$op" in 00:05:07.173 18:51:58 event -- scripts/common.sh@345 -- # : 1 00:05:07.173 18:51:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.173 18:51:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.173 18:51:58 event -- scripts/common.sh@365 -- # decimal 1 00:05:07.173 18:51:58 event -- scripts/common.sh@353 -- # local d=1 00:05:07.173 18:51:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.173 18:51:58 event -- scripts/common.sh@355 -- # echo 1 00:05:07.173 18:51:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.173 18:51:58 event -- scripts/common.sh@366 -- # decimal 2 00:05:07.173 18:51:58 event -- scripts/common.sh@353 -- # local d=2 00:05:07.173 18:51:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.173 18:51:58 event -- scripts/common.sh@355 -- # echo 2 00:05:07.173 18:51:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.173 18:51:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.173 18:51:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.173 18:51:58 event -- scripts/common.sh@368 -- # return 0 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.173 --rc genhtml_branch_coverage=1 00:05:07.173 --rc genhtml_function_coverage=1 00:05:07.173 --rc genhtml_legend=1 00:05:07.173 --rc geninfo_all_blocks=1 00:05:07.173 --rc geninfo_unexecuted_blocks=1 00:05:07.173 00:05:07.173 ' 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.173 --rc genhtml_branch_coverage=1 00:05:07.173 --rc genhtml_function_coverage=1 00:05:07.173 --rc genhtml_legend=1 00:05:07.173 --rc geninfo_all_blocks=1 00:05:07.173 --rc geninfo_unexecuted_blocks=1 00:05:07.173 00:05:07.173 ' 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.173 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:07.173 --rc genhtml_branch_coverage=1 00:05:07.173 --rc genhtml_function_coverage=1 00:05:07.173 --rc genhtml_legend=1 00:05:07.173 --rc geninfo_all_blocks=1 00:05:07.173 --rc geninfo_unexecuted_blocks=1 00:05:07.173 00:05:07.173 ' 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.173 --rc genhtml_branch_coverage=1 00:05:07.173 --rc genhtml_function_coverage=1 00:05:07.173 --rc genhtml_legend=1 00:05:07.173 --rc geninfo_all_blocks=1 00:05:07.173 --rc geninfo_unexecuted_blocks=1 00:05:07.173 00:05:07.173 ' 00:05:07.173 18:51:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:07.173 18:51:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.173 18:51:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:07.173 18:51:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.173 18:51:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.173 ************************************ 00:05:07.173 START TEST event_perf 00:05:07.173 ************************************ 00:05:07.173 18:51:58 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.173 Running I/O for 1 seconds...[2024-11-26 18:51:58.531176] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:05:07.173 [2024-11-26 18:51:58.531505] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58024 ] 00:05:07.432 [2024-11-26 18:51:58.723678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.691 [2024-11-26 18:51:58.889667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.691 [2024-11-26 18:51:58.889819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.691 [2024-11-26 18:51:58.889949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.691 [2024-11-26 18:51:58.889959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.071 Running I/O for 1 seconds... 00:05:09.071 lcore 0: 189640 00:05:09.071 lcore 1: 189639 00:05:09.071 lcore 2: 189640 00:05:09.071 lcore 3: 189641 00:05:09.071 done. 
00:05:09.071 00:05:09.071 real 0m1.647s 00:05:09.071 user 0m4.383s 00:05:09.071 sys 0m0.132s 00:05:09.071 18:52:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.071 18:52:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.071 ************************************ 00:05:09.071 END TEST event_perf 00:05:09.071 ************************************ 00:05:09.071 18:52:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.071 18:52:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:09.071 18:52:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.071 18:52:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.071 ************************************ 00:05:09.071 START TEST event_reactor 00:05:09.071 ************************************ 00:05:09.071 18:52:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.071 [2024-11-26 18:52:00.225489] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:05:09.071 [2024-11-26 18:52:00.225646] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58069 ] 00:05:09.071 [2024-11-26 18:52:00.407198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.330 [2024-11-26 18:52:00.571193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.706 test_start 00:05:10.706 oneshot 00:05:10.706 tick 100 00:05:10.706 tick 100 00:05:10.706 tick 250 00:05:10.706 tick 100 00:05:10.706 tick 100 00:05:10.706 tick 100 00:05:10.706 tick 250 00:05:10.706 tick 500 00:05:10.706 tick 100 00:05:10.706 tick 100 00:05:10.706 tick 250 00:05:10.706 tick 100 00:05:10.706 tick 100 00:05:10.706 test_end 00:05:10.706 00:05:10.706 real 0m1.622s 00:05:10.706 user 0m1.409s 00:05:10.706 sys 0m0.104s 00:05:10.706 ************************************ 00:05:10.706 END TEST event_reactor 00:05:10.706 ************************************ 00:05:10.706 18:52:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.706 18:52:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.706 18:52:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.706 18:52:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:10.706 18:52:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.706 18:52:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.706 ************************************ 00:05:10.706 START TEST event_reactor_perf 00:05:10.706 ************************************ 00:05:10.706 18:52:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.706 [2024-11-26 
18:52:01.905924] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:05:10.706 [2024-11-26 18:52:01.906278] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58100 ] 00:05:10.965 [2024-11-26 18:52:02.099793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.965 [2024-11-26 18:52:02.276149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.342 test_start 00:05:12.342 test_end 00:05:12.342 Performance: 267676 events per second 00:05:12.342 00:05:12.342 real 0m1.658s 00:05:12.342 user 0m1.438s 00:05:12.342 sys 0m0.109s 00:05:12.342 18:52:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.342 18:52:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.342 ************************************ 00:05:12.342 END TEST event_reactor_perf 00:05:12.342 ************************************ 00:05:12.342 18:52:03 event -- event/event.sh@49 -- # uname -s 00:05:12.342 18:52:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.342 18:52:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.342 18:52:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.342 18:52:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.342 18:52:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.342 ************************************ 00:05:12.342 START TEST event_scheduler 00:05:12.342 ************************************ 00:05:12.342 18:52:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.342 * Looking for test storage... 
00:05:12.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:12.342 18:52:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.342 18:52:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.342 18:52:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.601 18:52:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:12.601 18:52:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.602 18:52:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.602 --rc genhtml_branch_coverage=1 00:05:12.602 --rc genhtml_function_coverage=1 00:05:12.602 --rc genhtml_legend=1 00:05:12.602 --rc geninfo_all_blocks=1 00:05:12.602 --rc geninfo_unexecuted_blocks=1 00:05:12.602 00:05:12.602 ' 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.602 --rc genhtml_branch_coverage=1 00:05:12.602 --rc genhtml_function_coverage=1 00:05:12.602 --rc 
genhtml_legend=1 00:05:12.602 --rc geninfo_all_blocks=1 00:05:12.602 --rc geninfo_unexecuted_blocks=1 00:05:12.602 00:05:12.602 ' 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.602 --rc genhtml_branch_coverage=1 00:05:12.602 --rc genhtml_function_coverage=1 00:05:12.602 --rc genhtml_legend=1 00:05:12.602 --rc geninfo_all_blocks=1 00:05:12.602 --rc geninfo_unexecuted_blocks=1 00:05:12.602 00:05:12.602 ' 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.602 --rc genhtml_branch_coverage=1 00:05:12.602 --rc genhtml_function_coverage=1 00:05:12.602 --rc genhtml_legend=1 00:05:12.602 --rc geninfo_all_blocks=1 00:05:12.602 --rc geninfo_unexecuted_blocks=1 00:05:12.602 00:05:12.602 ' 00:05:12.602 18:52:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.602 18:52:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58176 00:05:12.602 18:52:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.602 18:52:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.602 18:52:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58176 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58176 ']' 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:12.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.602 18:52:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.602 [2024-11-26 18:52:03.915615] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:05:12.602 [2024-11-26 18:52:03.916318] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:05:12.861 [2024-11-26 18:52:04.110484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.120 [2024-11-26 18:52:04.248080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.120 [2024-11-26 18:52:04.248144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.120 [2024-11-26 18:52:04.248256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.120 [2024-11-26 18:52:04.248256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:13.688 18:52:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.688 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.688 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.688 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.688 POWER: Cannot set governor of lcore 0 to performance 00:05:13.688 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.688 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.688 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.688 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.688 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:13.688 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:13.688 POWER: Unable to set Power Management Environment for lcore 0 00:05:13.688 [2024-11-26 18:52:04.942802] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:13.688 [2024-11-26 18:52:04.942832] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:13.688 [2024-11-26 18:52:04.942849] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:13.688 [2024-11-26 18:52:04.942877] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:13.688 [2024-11-26 18:52:04.942891] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:13.688 [2024-11-26 18:52:04.942925] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.688 18:52:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.688 18:52:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.946 [2024-11-26 18:52:05.278392] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:13.946 18:52:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.946 18:52:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.946 18:52:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.946 18:52:05 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.946 18:52:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.946 ************************************ 00:05:13.946 START TEST scheduler_create_thread 00:05:13.946 ************************************ 00:05:13.946 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:13.946 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.946 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.946 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 2 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 3 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 4 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 5 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 6 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.206 7 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 8 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 9 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 10 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.206 18:52:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.582 18:52:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.582 18:52:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:15.582 18:52:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:15.582 18:52:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.582 18:52:06 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.956 18:52:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.956 ************************************ 00:05:16.956 END TEST scheduler_create_thread 00:05:16.956 ************************************ 00:05:16.956 00:05:16.956 real 0m2.625s 00:05:16.956 user 0m0.022s 00:05:16.956 sys 0m0.004s 00:05:16.956 18:52:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.956 18:52:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.956 18:52:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:16.956 18:52:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58176 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58176 ']' 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58176 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58176 00:05:16.956 killing process with pid 58176 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58176' 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58176 00:05:16.956 18:52:07 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58176 00:05:17.213 [2024-11-26 18:52:08.396958] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:18.587 00:05:18.587 real 0m5.976s 00:05:18.587 user 0m10.611s 00:05:18.587 sys 0m0.549s 00:05:18.587 18:52:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.587 18:52:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.587 ************************************ 00:05:18.587 END TEST event_scheduler 00:05:18.587 ************************************ 00:05:18.587 18:52:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:18.587 18:52:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:18.587 18:52:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.587 18:52:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.587 18:52:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.587 ************************************ 00:05:18.587 START TEST app_repeat 00:05:18.587 ************************************ 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:18.587 Process app_repeat pid: 58293 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58293 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58293' 00:05:18.587 spdk_app_start Round 0 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:18.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.587 18:52:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.587 18:52:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.587 [2024-11-26 18:52:09.658052] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:05:18.587 [2024-11-26 18:52:09.658202] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58293 ] 00:05:18.587 [2024-11-26 18:52:09.840453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.846 [2024-11-26 18:52:09.996992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.846 [2024-11-26 18:52:09.996995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.414 18:52:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.414 18:52:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.414 18:52:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.979 Malloc0 00:05:19.979 18:52:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.237 Malloc1 00:05:20.237 18:52:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.237 18:52:11 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.237 18:52:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.495 /dev/nbd0 00:05:20.495 18:52:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.495 18:52:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.495 18:52:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.496 18:52:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.496 1+0 records in 00:05:20.496 1+0 
records out 00:05:20.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390833 s, 10.5 MB/s 00:05:20.496 18:52:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.496 18:52:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.496 18:52:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.496 18:52:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.496 18:52:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.496 18:52:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.496 18:52:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.496 18:52:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.754 /dev/nbd1 00:05:20.754 18:52:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.754 18:52:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.754 18:52:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.755 18:52:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.755 1+0 records in 00:05:20.755 1+0 records out 00:05:20.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284433 s, 14.4 MB/s 00:05:20.755 18:52:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.755 18:52:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.755 18:52:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.755 18:52:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.755 18:52:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.755 18:52:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.755 18:52:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.755 18:52:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.755 18:52:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.755 18:52:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.323 { 00:05:21.323 "nbd_device": "/dev/nbd0", 00:05:21.323 "bdev_name": "Malloc0" 00:05:21.323 }, 00:05:21.323 { 00:05:21.323 "nbd_device": "/dev/nbd1", 00:05:21.323 "bdev_name": "Malloc1" 00:05:21.323 } 00:05:21.323 ]' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.323 { 00:05:21.323 "nbd_device": "/dev/nbd0", 00:05:21.323 "bdev_name": "Malloc0" 00:05:21.323 }, 00:05:21.323 { 00:05:21.323 "nbd_device": "/dev/nbd1", 00:05:21.323 "bdev_name": "Malloc1" 00:05:21.323 } 00:05:21.323 ]' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.323 /dev/nbd1' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.323 /dev/nbd1' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.323 256+0 records in 00:05:21.323 256+0 records out 00:05:21.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108057 s, 97.0 MB/s 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.323 256+0 records in 00:05:21.323 256+0 records out 00:05:21.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295187 s, 35.5 MB/s 00:05:21.323 18:52:12 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.323 256+0 records in 00:05:21.323 256+0 records out 00:05:21.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333021 s, 31.5 MB/s 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.323 18:52:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.581 18:52:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.148 18:52:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.149 18:52:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.407 18:52:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.407 18:52:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.019 18:52:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.954 [2024-11-26 18:52:15.168756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.954 [2024-11-26 18:52:15.298007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.954 [2024-11-26 18:52:15.298017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.213 
[2024-11-26 18:52:15.489062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.213 [2024-11-26 18:52:15.489184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.113 spdk_app_start Round 1 00:05:26.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.113 18:52:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.113 18:52:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:26.113 18:52:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:26.113 18:52:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:26.113 18:52:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.114 18:52:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.114 18:52:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:26.114 18:52:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.114 18:52:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.114 18:52:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.114 18:52:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.114 18:52:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.371 Malloc0 00:05:26.371 18:52:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.937 Malloc1 00:05:26.937 18:52:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.937 18:52:18 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.937 18:52:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.196 /dev/nbd0 00:05:27.196 18:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.196 18:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.196 1+0 records in 00:05:27.196 1+0 records out 00:05:27.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265306 s, 15.4 MB/s 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.196 18:52:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.197 
18:52:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.197 18:52:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.197 18:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.197 18:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.197 18:52:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.455 /dev/nbd1 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.455 1+0 records in 00:05:27.455 1+0 records out 00:05:27.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316417 s, 12.9 MB/s 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.455 18:52:18 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.455 18:52:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.455 18:52:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.712 18:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.712 { 00:05:27.712 "nbd_device": "/dev/nbd0", 00:05:27.712 "bdev_name": "Malloc0" 00:05:27.712 }, 00:05:27.712 { 00:05:27.712 "nbd_device": "/dev/nbd1", 00:05:27.712 "bdev_name": "Malloc1" 00:05:27.712 } 00:05:27.712 ]' 00:05:27.712 18:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.712 { 00:05:27.712 "nbd_device": "/dev/nbd0", 00:05:27.712 "bdev_name": "Malloc0" 00:05:27.712 }, 00:05:27.712 { 00:05:27.712 "nbd_device": "/dev/nbd1", 00:05:27.712 "bdev_name": "Malloc1" 00:05:27.712 } 00:05:27.712 ]' 00:05:27.712 18:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.970 /dev/nbd1' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.970 /dev/nbd1' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.970 
18:52:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.970 256+0 records in 00:05:27.970 256+0 records out 00:05:27.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073538 s, 143 MB/s 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.970 256+0 records in 00:05:27.970 256+0 records out 00:05:27.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270008 s, 38.8 MB/s 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.970 256+0 records in 00:05:27.970 256+0 records out 00:05:27.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341919 s, 30.7 MB/s 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.970 18:52:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.228 18:52:19 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.228 18:52:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.486 18:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.743 18:52:20 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.743 18:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.743 18:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.001 18:52:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.001 18:52:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.259 18:52:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.634 [2024-11-26 18:52:21.676719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.634 [2024-11-26 18:52:21.802530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.634 [2024-11-26 18:52:21.802533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.634 [2024-11-26 18:52:21.993835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.634 [2024-11-26 18:52:21.993914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.537 spdk_app_start Round 2 00:05:32.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:32.537 18:52:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.537 18:52:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.537 18:52:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:32.537 18:52:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:32.537 18:52:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.537 18:52:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.537 18:52:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.537 18:52:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.537 18:52:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.796 18:52:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.796 18:52:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.796 18:52:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.054 Malloc0 00:05:33.054 18:52:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.312 Malloc1 00:05:33.312 18:52:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.312 18:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.313 18:52:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.571 /dev/nbd0 00:05:33.571 18:52:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.571 18:52:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.571 18:52:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:33.571 18:52:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.571 18:52:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.571 18:52:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.571 18:52:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.830 1+0 records in 00:05:33.830 1+0 records out 00:05:33.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350511 s, 11.7 MB/s 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.830 18:52:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.830 18:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.830 18:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.830 18:52:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.091 /dev/nbd1 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:34.091 18:52:25 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.091 1+0 records in 00:05:34.091 1+0 records out 00:05:34.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463531 s, 8.8 MB/s 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.091 18:52:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.091 18:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.350 { 00:05:34.350 "nbd_device": "/dev/nbd0", 00:05:34.350 "bdev_name": "Malloc0" 00:05:34.350 }, 00:05:34.350 { 00:05:34.350 "nbd_device": "/dev/nbd1", 00:05:34.350 "bdev_name": "Malloc1" 00:05:34.350 } 00:05:34.350 ]' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.350 { 
00:05:34.350 "nbd_device": "/dev/nbd0", 00:05:34.350 "bdev_name": "Malloc0" 00:05:34.350 }, 00:05:34.350 { 00:05:34.350 "nbd_device": "/dev/nbd1", 00:05:34.350 "bdev_name": "Malloc1" 00:05:34.350 } 00:05:34.350 ]' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.350 /dev/nbd1' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.350 /dev/nbd1' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.350 256+0 records in 00:05:34.350 256+0 records out 00:05:34.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109688 s, 95.6 MB/s 00:05:34.350 18:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.350 18:52:25 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.609 256+0 records in 00:05:34.609 256+0 records out 00:05:34.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298694 s, 35.1 MB/s 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.609 256+0 records in 00:05:34.609 256+0 records out 00:05:34.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291684 s, 35.9 MB/s 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
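The write/verify sequence traced above (`nbd_dd_data_verify` in SPDK's `bdev/nbd_common.sh`) boils down to: fill a 1 MiB reference file from /dev/urandom, dd it onto each nbd device, then cmp each device byte-for-byte against the file and remove it. A simplified, self-contained sketch of that pattern — plain files stand in for /dev/nbd0 and /dev/nbd1, and `oflag=direct` is dropped, so it runs without real nbd devices:

```shell
# Simplified sketch of the nbd_dd_data_verify flow traced in the log.
# Plain files stand in for /dev/nbdX and direct I/O is omitted; everything
# else mirrors the dd/cmp/rm sequence shown above.
nbd_dd_data_verify() {
    operation=$1
    tmp_file=$2
    shift 2    # remaining arguments are the devices to write or verify

    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB of random reference data, copied to each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        # Byte-for-byte compare of the first 1 MiB of every device against
        # the reference, then drop the file (the rm at the end of the trace).
        for dev in "$@"; do
            cmp -n 1048576 "$tmp_file" "$dev" || return 1
        done
        rm -f "$tmp_file"
    fi
}
```

Verification failure surfaces as a nonzero `cmp` status, which the test framework's `set -e` semantics turn into a hard failure.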
00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.609 18:52:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.867 18:52:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.126 18:52:26 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.126 18:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.384 18:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.384 18:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.384 18:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.643 18:52:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.643 18:52:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.919 18:52:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.291 
[2024-11-26 18:52:28.362163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.291 [2024-11-26 18:52:28.491062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.291 [2024-11-26 18:52:28.491073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.549 [2024-11-26 18:52:28.682431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.549 [2024-11-26 18:52:28.682569] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.957 18:52:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58293 /var/tmp/spdk-nbd.sock 00:05:38.957 18:52:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58293 ']' 00:05:38.957 18:52:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.957 18:52:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.957 18:52:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:38.957 18:52:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.957 18:52:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.216 18:52:30 event.app_repeat -- event/event.sh@39 -- # killprocess 58293 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58293 ']' 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58293 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.216 18:52:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58293 00:05:39.476 18:52:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.476 18:52:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.476 killing process with pid 58293 00:05:39.476 18:52:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58293' 00:05:39.476 18:52:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58293 00:05:39.476 18:52:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58293 00:05:40.411 spdk_app_start is called in Round 0. 00:05:40.411 Shutdown signal received, stop current app iteration 00:05:40.411 Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 reinitialization... 00:05:40.411 spdk_app_start is called in Round 1. 00:05:40.411 Shutdown signal received, stop current app iteration 00:05:40.411 Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 reinitialization... 00:05:40.411 spdk_app_start is called in Round 2. 
00:05:40.411 Shutdown signal received, stop current app iteration 00:05:40.411 Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 reinitialization... 00:05:40.411 spdk_app_start is called in Round 3. 00:05:40.411 Shutdown signal received, stop current app iteration 00:05:40.411 18:52:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.411 18:52:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.411 00:05:40.411 real 0m21.983s 00:05:40.411 user 0m48.897s 00:05:40.411 sys 0m3.130s 00:05:40.411 18:52:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.411 18:52:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.411 ************************************ 00:05:40.412 END TEST app_repeat 00:05:40.412 ************************************ 00:05:40.412 18:52:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.412 18:52:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:40.412 18:52:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.412 18:52:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.412 18:52:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.412 ************************************ 00:05:40.412 START TEST cpu_locks 00:05:40.412 ************************************ 00:05:40.412 18:52:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:40.412 * Looking for test storage... 
00:05:40.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.412 18:52:31 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.412 18:52:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.412 18:52:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.669 18:52:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.669 18:52:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.669 18:52:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.669 18:52:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.670 18:52:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.670 --rc genhtml_branch_coverage=1 00:05:40.670 --rc genhtml_function_coverage=1 00:05:40.670 --rc genhtml_legend=1 00:05:40.670 --rc geninfo_all_blocks=1 00:05:40.670 --rc geninfo_unexecuted_blocks=1 00:05:40.670 00:05:40.670 ' 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.670 --rc genhtml_branch_coverage=1 00:05:40.670 --rc genhtml_function_coverage=1 00:05:40.670 --rc genhtml_legend=1 00:05:40.670 --rc geninfo_all_blocks=1 00:05:40.670 --rc geninfo_unexecuted_blocks=1 
00:05:40.670 00:05:40.670 ' 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.670 --rc genhtml_branch_coverage=1 00:05:40.670 --rc genhtml_function_coverage=1 00:05:40.670 --rc genhtml_legend=1 00:05:40.670 --rc geninfo_all_blocks=1 00:05:40.670 --rc geninfo_unexecuted_blocks=1 00:05:40.670 00:05:40.670 ' 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.670 --rc genhtml_branch_coverage=1 00:05:40.670 --rc genhtml_function_coverage=1 00:05:40.670 --rc genhtml_legend=1 00:05:40.670 --rc geninfo_all_blocks=1 00:05:40.670 --rc geninfo_unexecuted_blocks=1 00:05:40.670 00:05:40.670 ' 00:05:40.670 18:52:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:40.670 18:52:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:40.670 18:52:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:40.670 18:52:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.670 18:52:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.670 ************************************ 00:05:40.670 START TEST default_locks 00:05:40.670 ************************************ 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58768 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58768 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58768 ']' 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.670 18:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.670 [2024-11-26 18:52:31.926316] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
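The `waitforlisten` call traced here polls until the freshly launched spdk_tgt is up and reachable on its UNIX-domain RPC socket. A rough, hypothetical sketch of that loop — the real helper in `autotest_common.sh` additionally confirms the socket answers RPCs; the plain existence test below is only a stand-in for that probe:

```shell
# Rough sketch of the waitforlisten pattern: poll until the target process
# has created its UNIX-domain RPC socket, bailing out early if the process
# dies.  The existence test is a simplified stand-in for the real helper's
# RPC probe.
waitforlisten_sketch() {
    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}
    max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        [ -e "$rpc_addr" ] && return 0           # socket showed up
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                     # timed out
}
```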
00:05:40.670 [2024-11-26 18:52:31.926481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58768 ] 00:05:40.927 [2024-11-26 18:52:32.102275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.927 [2024-11-26 18:52:32.230406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.861 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.861 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:41.861 18:52:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58768 00:05:41.861 18:52:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58768 00:05:41.861 18:52:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58768 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58768 ']' 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58768 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58768 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.429 killing process with pid 58768 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58768' 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58768 00:05:42.429 18:52:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58768 00:05:44.963 18:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58768 00:05:44.963 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:44.963 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58768 00:05:44.963 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.963 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58768 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58768 ']' 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
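The `killprocess` helper seen above (and earlier for pid 58293) follows a fixed safety sequence, every step of which is visible in the trace: confirm the pid is still alive with `kill -0`, double-check the process name on Linux so a sudo wrapper is never killed, then kill and reap it. A condensed sketch of that sequence:

```shell
# Sketch of the killprocess helper traced above: verify the pid is still
# running, double-check the process name on Linux (never kill a sudo
# wrapper), then kill it and reap the exit status.
killprocess_sketch() {
    pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1       # safety check from the log
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap; SIGTERM status is expected
}
```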
00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.964 ERROR: process (pid: 58768) is no longer running 00:05:44.964 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58768) - No such process 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.964 00:05:44.964 real 0m3.988s 00:05:44.964 user 0m4.038s 00:05:44.964 sys 0m0.729s 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.964 18:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.964 ************************************ 00:05:44.964 END TEST default_locks 00:05:44.964 ************************************ 00:05:44.964 18:52:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.964 18:52:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:44.964 18:52:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.964 18:52:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.964 ************************************ 00:05:44.964 START TEST default_locks_via_rpc 00:05:44.964 ************************************ 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58845 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58845 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58845 ']' 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.964 18:52:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.964 [2024-11-26 18:52:36.004049] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
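Once the target is up, the `locks_exist` check used throughout cpu_locks.sh (visible in the trace as `lslocks -p <pid>` piped into `grep -q spdk_cpu_lock`) asserts that spdk_tgt actually holds its CPU-core lock file. A minimal sketch, assuming — as the grep pattern suggests — that the lock file's name contains `spdk_cpu_lock`; in the demo test a `flock(1)` child holding a similarly named file stands in for spdk_tgt:

```shell
# Sketch of the locks_exist check from cpu_locks.sh: assert that the target
# pid holds a visible lock on its spdk_cpu_lock file.  Requires util-linux
# lslocks, which reads /proc/locks.
locks_exist() {
    pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```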
00:05:44.964 [2024-11-26 18:52:36.004229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58845 ]
00:05:44.964 [2024-11-26 18:52:36.189875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.964 [2024-11-26 18:52:36.322648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58845
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58845
00:05:45.901 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58845
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58845 ']'
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58845
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58845
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58845
18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58845'
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58845
00:05:46.469 18:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58845
00:05:49.001
00:05:49.001 real	0m4.087s
00:05:49.001 user	0m4.156s
00:05:49.001 sys	0m0.733s
00:05:49.001 18:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.001 18:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.001 ************************************
00:05:49.001 END TEST default_locks_via_rpc
00:05:49.001 ************************************
00:05:49.001 18:52:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:49.001 18:52:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.001 18:52:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.001 18:52:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:49.001 ************************************
00:05:49.001 START TEST non_locking_app_on_locked_coremask
00:05:49.001 ************************************
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58919
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58919 /var/tmp/spdk.sock
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58919 ']'
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:49.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:49.002 18:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.002 [2024-11-26 18:52:40.130338] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:05:49.002 [2024-11-26 18:52:40.130548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ]
00:05:49.002 [2024-11-26 18:52:40.313382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.262 [2024-11-26 18:52:40.452243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58935
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58935 /var/tmp/spdk2.sock
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58935 ']'
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:50.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:50.197 18:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:50.197 [2024-11-26 18:52:41.481673] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:05:50.197 [2024-11-26 18:52:41.481864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ]
00:05:50.457 [2024-11-26 18:52:41.689971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:50.457 [2024-11-26 18:52:41.690034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.742 [2024-11-26 18:52:41.960729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.278 18:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.278 18:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:53.278 18:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58919
00:05:53.278 18:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58919
00:05:53.278 18:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58919
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58919 ']'
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58919
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58919
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:53.845 killing process with pid 58919
18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58919'
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58919
00:05:53.845 18:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58919
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58935
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58935 ']'
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58935
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58935
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:59.137 killing process with pid 58935
18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58935'
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58935
00:05:59.137 18:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58935
00:06:00.513
00:06:00.513 real	0m11.864s
00:06:00.513 user	0m12.364s
00:06:00.513 sys	0m1.576s
00:06:00.513 18:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:00.513 18:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.513 ************************************
00:06:00.513 END TEST non_locking_app_on_locked_coremask
00:06:00.513 ************************************
00:06:00.773 18:52:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:00.773 18:52:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:00.773 18:52:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.773 18:52:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:00.773 ************************************
00:06:00.773 START TEST locking_app_on_unlocked_coremask
00:06:00.773 ************************************
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59086
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59086 /var/tmp/spdk.sock
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59086 ']'
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:00.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.773 18:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:00.773 [2024-11-26 18:52:52.034471] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:06:00.773 [2024-11-26 18:52:52.034655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59086 ]
00:06:01.033 [2024-11-26 18:52:52.210662] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:01.033 [2024-11-26 18:52:52.210766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.033 [2024-11-26 18:52:52.346747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59108
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59108 /var/tmp/spdk2.sock
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59108 ']'
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:01.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:01.970 18:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.230 [2024-11-26 18:52:53.391158] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:06:02.230 [2024-11-26 18:52:53.392033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ]
00:06:02.493 [2024-11-26 18:52:53.598649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.757 [2024-11-26 18:52:53.874504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.293 18:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.293 18:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:05.293 18:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59108
00:06:05.293 18:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59108
00:06:05.293 18:52:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59086
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59086 ']'
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59086
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59086
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59086'
killing process with pid 59086
18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59086
00:06:05.930 18:52:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59086
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59108
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59108 ']'
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59108
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59108
killing process with pid 59108
18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59108'
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59108
00:06:11.200 18:53:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59108
00:06:13.101
00:06:13.101 real	0m12.163s
00:06:13.101 user	0m12.749s
00:06:13.101 sys	0m1.639s
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:13.101 ************************************
00:06:13.101 END TEST locking_app_on_unlocked_coremask
00:06:13.101 ************************************
00:06:13.101 18:53:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:13.101 18:53:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:13.101 18:53:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:13.101 18:53:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.101 ************************************
00:06:13.101 START TEST locking_app_on_locked_coremask
00:06:13.101 ************************************
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59261
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59261 /var/tmp/spdk.sock
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59261 ']'
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:13.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:13.101 18:53:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:13.101 [2024-11-26 18:53:04.257986] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:06:13.101 [2024-11-26 18:53:04.258888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59261 ]
00:06:13.101 [2024-11-26 18:53:04.445573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.359 [2024-11-26 18:53:04.577616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59283
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59283 /var/tmp/spdk2.sock
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59283 /var/tmp/spdk2.sock
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59283 /var/tmp/spdk2.sock
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59283 ']'
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.293 18:53:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:14.551 [2024-11-26 18:53:05.613195] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:06:14.551 [2024-11-26 18:53:05.613373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59283 ]
00:06:14.551 [2024-11-26 18:53:05.831974] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59261 has claimed it.
00:06:14.551 [2024-11-26 18:53:05.832080] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:15.116 ERROR: process (pid: 59283) is no longer running
00:06:15.116 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59283) - No such process
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59261
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59261
00:06:15.116 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:15.374 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59261
00:06:15.374 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59261 ']'
00:06:15.374 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59261
00:06:15.374 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:15.374 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:15.374 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59261
00:06:15.633 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:15.633 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59261
18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59261'
00:06:15.633 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59261
00:06:15.633 18:53:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59261
00:06:18.165
00:06:18.165 real	0m4.888s
00:06:18.165 user	0m5.241s
00:06:18.165 sys	0m0.968s
00:06:18.165 18:53:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:18.165 18:53:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.165 ************************************
00:06:18.165 END TEST locking_app_on_locked_coremask
00:06:18.165 ************************************
00:06:18.165 18:53:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:18.165 18:53:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:18.165 18:53:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:18.165 18:53:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:18.165 ************************************
00:06:18.165 START TEST locking_overlapped_coremask
00:06:18.165 ************************************
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59347
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59347 /var/tmp/spdk.sock
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59347 ']'
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:18.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:18.165 18:53:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.165 [2024-11-26 18:53:09.206727] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:06:18.165 [2024-11-26 18:53:09.206985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59347 ] 00:06:18.165 [2024-11-26 18:53:09.395290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.424 [2024-11-26 18:53:09.535375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.424 [2024-11-26 18:53:09.535445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.424 [2024-11-26 18:53:09.535457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59371 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59371 /var/tmp/spdk2.sock 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59371 /var/tmp/spdk2.sock 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59371 /var/tmp/spdk2.sock 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.358 18:53:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 [2024-11-26 18:53:10.534309] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:06:19.358 [2024-11-26 18:53:10.534655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:06:19.616 [2024-11-26 18:53:10.730582] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59347 has claimed it. 00:06:19.616 [2024-11-26 18:53:10.730678] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
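The "Cannot create lock on core 2" failure above is the point of this test: SPDK guards each claimed core with a per-core lock file (the `/var/tmp/spdk_cpu_lock_NNN` files checked later by `check_remaining_locks`), so a second target with an overlapping coremask cannot start. A hedged sketch of that protocol, using `flock(1)` in a temporary directory rather than the real `/var/tmp` paths (`claim_core` is an illustrative name, not SPDK's implementation):

```shell
# Hypothetical sketch of per-core locking: take an exclusive,
# non-blocking flock on one lock file per claimed core. A second
# claim on the same core (here, a second open of the same file,
# which creates a new open file description) is refused.
lockdir=$(mktemp -d)            # stand-in for /var/tmp
claim_core() {
    local file fd
    file=$(printf '%s/spdk_cpu_lock_%03d' "$lockdir" "$1")
    exec {fd}>"$file"           # fd stays open so the lock is held
    flock -xn "$fd" || { echo "Cannot create lock on core $1" >&2; return 1; }
}
claim_core 2 && echo "core 2 claimed"
claim_core 2 || echo "overlapping claim refused"
```

Holding the descriptor open for the life of the process is what makes the lock outlive the `flock` call itself, which is why the log's cleanup step later removes the lock files explicitly.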
00:06:19.876 ERROR: process (pid: 59371) is no longer running 00:06:19.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59371) - No such process 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59347 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59347 ']' 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59347 00:06:19.877 18:53:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59347 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.877 killing process with pid 59347 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59347' 00:06:19.877 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59347 00:06:20.259 18:53:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59347 00:06:22.201 00:06:22.202 real 0m4.424s 00:06:22.202 user 0m11.954s 00:06:22.202 sys 0m0.690s 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.202 ************************************ 00:06:22.202 END TEST locking_overlapped_coremask 00:06:22.202 ************************************ 00:06:22.202 18:53:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.202 18:53:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.202 18:53:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.202 18:53:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.202 ************************************ 00:06:22.202 START TEST 
locking_overlapped_coremask_via_rpc 00:06:22.202 ************************************ 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59442 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59442 /var/tmp/spdk.sock 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59442 ']' 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.202 18:53:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.459 [2024-11-26 18:53:13.675881] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:06:22.459 [2024-11-26 18:53:13.676574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:06:22.716 [2024-11-26 18:53:13.865476] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.716 [2024-11-26 18:53:13.865552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.716 [2024-11-26 18:53:14.004420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.716 [2024-11-26 18:53:14.004523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.716 [2024-11-26 18:53:14.004536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59460 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59460 /var/tmp/spdk2.sock 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59460 ']' 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.650 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.650 18:53:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.908 [2024-11-26 18:53:15.036966] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:06:23.908 [2024-11-26 18:53:15.037771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:06:23.908 [2024-11-26 18:53:15.243365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.908 [2024-11-26 18:53:15.243446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.166 [2024-11-26 18:53:15.514776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.166 [2024-11-26 18:53:15.514920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.166 [2024-11-26 18:53:15.514955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.699 18:53:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.699 18:53:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.699 [2024-11-26 18:53:17.993278] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59442 has claimed it. 00:06:26.699 request: 00:06:26.699 { 00:06:26.699 "method": "framework_enable_cpumask_locks", 00:06:26.699 "req_id": 1 00:06:26.699 } 00:06:26.699 Got JSON-RPC error response 00:06:26.699 response: 00:06:26.699 { 00:06:26.699 "code": -32603, 00:06:26.699 "message": "Failed to claim CPU core: 2" 00:06:26.699 } 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59442 /var/tmp/spdk.sock 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59442 ']' 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.699 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59460 /var/tmp/spdk2.sock 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59460 ']' 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
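The `NOT`/`valid_exec_arg`/`es` dance traced above inverts a command's exit status: the `rpc_cmd framework_enable_cpumask_locks` call was *expected* to fail (core 2 is already claimed), so the `-32603` JSON-RPC error makes the test pass. A minimal stand-in for that pattern (not the real helper, which also tracks the error code in `es`):

```shell
# Minimal stand-in for the traced NOT helper: succeed only when the
# wrapped command fails, so an expected error (like the JSON-RPC
# "Failed to claim CPU core: 2" response above) counts as a pass.
NOT() {
    if "$@"; then
        return 1     # unexpected success
    fi
    return 0         # expected failure
}
NOT false && echo "expected failure observed"
```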
00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.958 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.526 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.526 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:27.526 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.526 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.526 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.527 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.527 00:06:27.527 real 0m5.096s 00:06:27.527 user 0m2.032s 00:06:27.527 sys 0m0.259s 00:06:27.527 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.527 ************************************ 00:06:27.527 END TEST locking_overlapped_coremask_via_rpc 00:06:27.527 ************************************ 00:06:27.527 18:53:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.527 18:53:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.527 18:53:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59442 ]] 00:06:27.527 18:53:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59442 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59442 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59442 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.527 killing process with pid 59442 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59442' 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59442 00:06:27.527 18:53:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59442 00:06:30.105 18:53:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59460 ]] 00:06:30.105 18:53:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59460 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59460 ']' 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59460 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59460 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:30.105 killing process with pid 59460 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59460' 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59460 00:06:30.105 18:53:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59460 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59442 ]] 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59442 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59442 00:06:32.009 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59442) - No such process 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59442 is not found' 00:06:32.009 Process with pid 59442 is not found 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59460 ]] 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59460 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59460 ']' 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59460 00:06:32.009 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59460) - No such process 00:06:32.009 Process with pid 59460 is not found 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59460 is not found' 00:06:32.009 18:53:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.009 ************************************ 00:06:32.009 END TEST cpu_locks 00:06:32.009 ************************************ 00:06:32.009 00:06:32.009 real 0m51.650s 00:06:32.009 user 1m29.981s 00:06:32.009 sys 0m7.862s 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:32.009 18:53:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.009 00:06:32.009 real 1m25.041s 00:06:32.009 user 2m36.941s 00:06:32.009 sys 0m12.150s 00:06:32.009 18:53:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.009 ************************************ 00:06:32.009 END TEST event 00:06:32.009 18:53:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.009 ************************************ 00:06:32.009 18:53:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.009 18:53:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.009 18:53:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.009 18:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.009 ************************************ 00:06:32.009 START TEST thread 00:06:32.009 ************************************ 00:06:32.009 18:53:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.268 * Looking for test storage... 
00:06:32.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:32.268 18:53:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.268 18:53:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.268 18:53:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.268 18:53:23 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.268 18:53:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.268 18:53:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.268 18:53:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.268 18:53:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.268 18:53:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.268 18:53:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.269 18:53:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.269 18:53:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.269 18:53:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.269 18:53:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.269 18:53:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.269 18:53:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:32.269 18:53:23 thread -- scripts/common.sh@345 -- # : 1 00:06:32.269 18:53:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.269 18:53:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.269 18:53:23 thread -- scripts/common.sh@365 -- # decimal 1 00:06:32.269 18:53:23 thread -- scripts/common.sh@353 -- # local d=1 00:06:32.269 18:53:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.269 18:53:23 thread -- scripts/common.sh@355 -- # echo 1 00:06:32.269 18:53:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.269 18:53:23 thread -- scripts/common.sh@366 -- # decimal 2 00:06:32.269 18:53:23 thread -- scripts/common.sh@353 -- # local d=2 00:06:32.269 18:53:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.269 18:53:23 thread -- scripts/common.sh@355 -- # echo 2 00:06:32.269 18:53:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.269 18:53:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.269 18:53:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.269 18:53:23 thread -- scripts/common.sh@368 -- # return 0 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.269 --rc genhtml_branch_coverage=1 00:06:32.269 --rc genhtml_function_coverage=1 00:06:32.269 --rc genhtml_legend=1 00:06:32.269 --rc geninfo_all_blocks=1 00:06:32.269 --rc geninfo_unexecuted_blocks=1 00:06:32.269 00:06:32.269 ' 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.269 --rc genhtml_branch_coverage=1 00:06:32.269 --rc genhtml_function_coverage=1 00:06:32.269 --rc genhtml_legend=1 00:06:32.269 --rc geninfo_all_blocks=1 00:06:32.269 --rc geninfo_unexecuted_blocks=1 00:06:32.269 00:06:32.269 ' 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.269 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.269 --rc genhtml_branch_coverage=1 00:06:32.269 --rc genhtml_function_coverage=1 00:06:32.269 --rc genhtml_legend=1 00:06:32.269 --rc geninfo_all_blocks=1 00:06:32.269 --rc geninfo_unexecuted_blocks=1 00:06:32.269 00:06:32.269 ' 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.269 --rc genhtml_branch_coverage=1 00:06:32.269 --rc genhtml_function_coverage=1 00:06:32.269 --rc genhtml_legend=1 00:06:32.269 --rc geninfo_all_blocks=1 00:06:32.269 --rc geninfo_unexecuted_blocks=1 00:06:32.269 00:06:32.269 ' 00:06:32.269 18:53:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.269 18:53:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.269 ************************************ 00:06:32.269 START TEST thread_poller_perf 00:06:32.269 ************************************ 00:06:32.269 18:53:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.269 [2024-11-26 18:53:23.604804] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
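The `cmp_versions` trace above splits the two version strings on dots and compares them field by field, here concluding that lcov 1.15 < 2 and therefore selecting the branch/function-coverage `LCOV_OPTS`. A compact sketch of the same comparison (`version_lt` is an illustrative name, not the scripts/common.sh helper):

```shell
# Field-by-field dotted-version comparison in the spirit of the
# cmp_versions trace above: returns 0 when $1 < $2.
version_lt() {
    local -a a b
    local i
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Missing trailing fields default to 0, so `1.15` vs `2` compares as `1.15.0…` vs `2.0.0…`, matching the trace's verdict.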
00:06:32.269 [2024-11-26 18:53:23.605161] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:06:32.588 [2024-11-26 18:53:23.785940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.846 [2024-11-26 18:53:23.956708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.846 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.223 [2024-11-26T18:53:25.590Z] ====================================== 00:06:34.223 [2024-11-26T18:53:25.590Z] busy:2213929268 (cyc) 00:06:34.223 [2024-11-26T18:53:25.590Z] total_run_count: 297000 00:06:34.223 [2024-11-26T18:53:25.590Z] tsc_hz: 2200000000 (cyc) 00:06:34.223 [2024-11-26T18:53:25.590Z] ====================================== 00:06:34.223 [2024-11-26T18:53:25.590Z] poller_cost: 7454 (cyc), 3388 (nsec) 00:06:34.223 00:06:34.223 real 0m1.638s 00:06:34.223 user 0m1.417s 00:06:34.223 sys 0m0.110s 00:06:34.223 18:53:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.223 18:53:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.223 ************************************ 00:06:34.223 END TEST thread_poller_perf 00:06:34.223 ************************************ 00:06:34.223 18:53:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.223 18:53:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:34.223 18:53:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.223 18:53:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.223 ************************************ 00:06:34.223 START TEST thread_poller_perf 00:06:34.223 
************************************ 00:06:34.223 18:53:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.223 [2024-11-26 18:53:25.300733] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:06:34.223 [2024-11-26 18:53:25.301105] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:06:34.223 [2024-11-26 18:53:25.477602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.482 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:34.482 [2024-11-26 18:53:25.608451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.856 [2024-11-26T18:53:27.223Z] ====================================== 00:06:35.856 [2024-11-26T18:53:27.223Z] busy:2204725587 (cyc) 00:06:35.856 [2024-11-26T18:53:27.223Z] total_run_count: 3794000 00:06:35.856 [2024-11-26T18:53:27.223Z] tsc_hz: 2200000000 (cyc) 00:06:35.856 [2024-11-26T18:53:27.223Z] ====================================== 00:06:35.856 [2024-11-26T18:53:27.223Z] poller_cost: 581 (cyc), 264 (nsec) 00:06:35.856 00:06:35.856 real 0m1.580s 00:06:35.856 user 0m1.378s 00:06:35.856 sys 0m0.092s 00:06:35.856 18:53:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.856 18:53:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.856 ************************************ 00:06:35.856 END TEST thread_poller_perf 00:06:35.856 ************************************ 00:06:35.856 18:53:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:35.856 ************************************ 00:06:35.856 END TEST thread 00:06:35.856 ************************************ 00:06:35.856 
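The `poller_cost` figures printed by each run are simple derived values: busy cycles divided by `total_run_count`, then converted to nanoseconds at the reported `tsc_hz`. Rederiving the second run's numbers from its raw counters:

```shell
# Rederive the second poller_perf run's poller_cost from the counters
# in the log: cycles per poll = busy / total_run_count, then the
# nanosecond equivalent at the reported TSC frequency.
busy=2204725587
total_run_count=3794000
tsc_hz=2200000000
cyc=$(( busy / total_run_count ))
nsec=$(( cyc * 1000000000 / tsc_hz ))
echo "poller_cost: $cyc (cyc), $nsec (nsec)"   # matches the log: 581 (cyc), 264 (nsec)
```

The same arithmetic on the first run (2213929268 cycles over 297000 polls) gives the 7454-cycle, 3388 ns cost reported there; the zero-period run is cheaper because each iteration skips the timer bookkeeping.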
00:06:35.856 real 0m3.513s 00:06:35.856 user 0m2.952s 00:06:35.856 sys 0m0.341s 00:06:35.856 18:53:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.856 18:53:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.856 18:53:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:35.856 18:53:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:35.856 18:53:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.856 18:53:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.856 18:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:35.856 ************************************ 00:06:35.856 START TEST app_cmdline 00:06:35.856 ************************************ 00:06:35.856 18:53:26 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:35.856 * Looking for test storage... 00:06:35.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.856 18:53:27 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.856 18:53:27 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.856 18:53:27 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.856 18:53:27 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:35.856 18:53:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.857 18:53:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc 
genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 18:53:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:35.857 18:53:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59786 00:06:35.857 18:53:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59786 00:06:35.857 18:53:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59786 ']' 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.857 18:53:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.117 [2024-11-26 18:53:27.262917] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:06:36.117 [2024-11-26 18:53:27.263322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59786 ] 00:06:36.117 [2024-11-26 18:53:27.445704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.376 [2024-11-26 18:53:27.606404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.368 18:53:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.368 18:53:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:37.368 18:53:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:37.643 { 00:06:37.643 "version": "SPDK v25.01-pre git sha1 658cb4c04", 00:06:37.643 "fields": { 00:06:37.643 "major": 25, 00:06:37.643 "minor": 1, 00:06:37.643 "patch": 0, 00:06:37.643 "suffix": "-pre", 00:06:37.643 "commit": "658cb4c04" 00:06:37.643 } 00:06:37.643 } 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.643 18:53:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.643 18:53:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.643 18:53:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.644 18:53:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.644 18:53:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.644 18:53:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:37.644 18:53:28 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.902 request: 00:06:37.902 { 00:06:37.902 "method": "env_dpdk_get_mem_stats", 00:06:37.902 "req_id": 1 00:06:37.902 } 00:06:37.902 Got JSON-RPC error response 00:06:37.902 response: 00:06:37.902 { 00:06:37.902 "code": -32601, 00:06:37.902 "message": "Method not found" 00:06:37.902 } 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.902 18:53:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59786 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59786 ']' 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59786 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59786 00:06:37.902 killing process with pid 59786 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59786' 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 59786 00:06:37.902 18:53:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 59786 00:06:40.433 00:06:40.433 real 0m4.396s 00:06:40.433 user 0m4.899s 00:06:40.433 sys 0m0.680s 00:06:40.433 
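Annotation: the `code: -32601` response above is the JSON-RPC reserved "Method not found" error. It is expected here because `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so `env_dpdk_get_mem_stats` is rejected. A minimal sketch of that allow-list behavior (the dispatcher below is purely illustrative, not SPDK code):

```python
# Illustrative JSON-RPC-style dispatcher mimicking the allow-list rejection
# exercised above. The allow-list mirrors the --rpcs-allowed flag in the log;
# the dispatch() helper itself is hypothetical.
import json

ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request: str) -> str:
    req = json.loads(request)
    if req["method"] not in ALLOWED:
        # -32601 is the JSON-RPC reserved code for an unknown/blocked method.
        return json.dumps({"id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "Method not found"}})
    return json.dumps({"id": req.get("id"), "result": {}})

resp = json.loads(dispatch('{"id": 1, "method": "env_dpdk_get_mem_stats"}'))
print(resp["error"]["code"])  # -32601
```

The test then asserts `es=1` on the failing call, which is exactly the NOT-wrapped expectation in `cmdline.sh`.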
************************************ 00:06:40.433 END TEST app_cmdline 00:06:40.433 ************************************ 00:06:40.433 18:53:31 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.433 18:53:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.433 18:53:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.433 18:53:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.433 18:53:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.433 18:53:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.433 ************************************ 00:06:40.433 START TEST version 00:06:40.433 ************************************ 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.433 * Looking for test storage... 00:06:40.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.433 18:53:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.433 18:53:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.433 18:53:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.433 18:53:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.433 18:53:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.433 18:53:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.433 18:53:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.433 18:53:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.433 18:53:31 version -- scripts/common.sh@340 -- # ver1_l=2 
00:06:40.433 18:53:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.433 18:53:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.433 18:53:31 version -- scripts/common.sh@344 -- # case "$op" in 00:06:40.433 18:53:31 version -- scripts/common.sh@345 -- # : 1 00:06:40.433 18:53:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.433 18:53:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.433 18:53:31 version -- scripts/common.sh@365 -- # decimal 1 00:06:40.433 18:53:31 version -- scripts/common.sh@353 -- # local d=1 00:06:40.433 18:53:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.433 18:53:31 version -- scripts/common.sh@355 -- # echo 1 00:06:40.433 18:53:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.433 18:53:31 version -- scripts/common.sh@366 -- # decimal 2 00:06:40.433 18:53:31 version -- scripts/common.sh@353 -- # local d=2 00:06:40.433 18:53:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.433 18:53:31 version -- scripts/common.sh@355 -- # echo 2 00:06:40.433 18:53:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.433 18:53:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.433 18:53:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.433 18:53:31 version -- scripts/common.sh@368 -- # return 0 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.433 --rc genhtml_branch_coverage=1 00:06:40.433 --rc genhtml_function_coverage=1 00:06:40.433 --rc genhtml_legend=1 00:06:40.433 --rc geninfo_all_blocks=1 00:06:40.433 --rc geninfo_unexecuted_blocks=1 00:06:40.433 00:06:40.433 ' 00:06:40.433 18:53:31 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.433 --rc genhtml_branch_coverage=1 00:06:40.433 --rc genhtml_function_coverage=1 00:06:40.433 --rc genhtml_legend=1 00:06:40.433 --rc geninfo_all_blocks=1 00:06:40.433 --rc geninfo_unexecuted_blocks=1 00:06:40.433 00:06:40.433 ' 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.433 --rc genhtml_branch_coverage=1 00:06:40.433 --rc genhtml_function_coverage=1 00:06:40.433 --rc genhtml_legend=1 00:06:40.433 --rc geninfo_all_blocks=1 00:06:40.433 --rc geninfo_unexecuted_blocks=1 00:06:40.433 00:06:40.433 ' 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.433 --rc genhtml_branch_coverage=1 00:06:40.433 --rc genhtml_function_coverage=1 00:06:40.433 --rc genhtml_legend=1 00:06:40.433 --rc geninfo_all_blocks=1 00:06:40.433 --rc geninfo_unexecuted_blocks=1 00:06:40.433 00:06:40.433 ' 00:06:40.433 18:53:31 version -- app/version.sh@17 -- # get_header_version major 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # cut -f2 00:06:40.433 18:53:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.433 18:53:31 version -- app/version.sh@17 -- # major=25 00:06:40.433 18:53:31 version -- app/version.sh@18 -- # get_header_version minor 00:06:40.433 18:53:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # cut -f2 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.433 18:53:31 version -- app/version.sh@18 -- 
# minor=1 00:06:40.433 18:53:31 version -- app/version.sh@19 -- # get_header_version patch 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.433 18:53:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # cut -f2 00:06:40.433 18:53:31 version -- app/version.sh@19 -- # patch=0 00:06:40.433 18:53:31 version -- app/version.sh@20 -- # get_header_version suffix 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # cut -f2 00:06:40.433 18:53:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.433 18:53:31 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.433 18:53:31 version -- app/version.sh@20 -- # suffix=-pre 00:06:40.433 18:53:31 version -- app/version.sh@22 -- # version=25.1 00:06:40.433 18:53:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:40.433 18:53:31 version -- app/version.sh@28 -- # version=25.1rc0 00:06:40.433 18:53:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:40.433 18:53:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.433 18:53:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:40.433 18:53:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:40.433 00:06:40.433 real 0m0.263s 00:06:40.433 user 0m0.180s 00:06:40.433 sys 0m0.118s 00:06:40.433 ************************************ 00:06:40.433 END TEST version 00:06:40.433 ************************************ 00:06:40.433 18:53:31 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.433 18:53:31 version -- 
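Annotation: `version.sh` greps the `SPDK_VERSION_*` defines out of `include/spdk/version.h` and assembles `25.1rc0` for comparison against the Python package version. A rough equivalent of that assembly, with the header contents reconstructed from the values the log reports (major=25, minor=1, patch=0, suffix=-pre); the `-pre` → `rc0` mapping is inferred from the log's final comparison:

```python
# Sketch of the get_header_version + assembly logic from version.sh.
# The header snippet is reconstructed from values in the log, not read
# from the real include/spdk/version.h.
import re

header = '''
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
'''

def get_header_version(name: str) -> str:
    m = re.search(rf'^#define SPDK_VERSION_{name}\s+(.+)$', header, re.M)
    return m.group(1).strip('"')

major, minor, patch = (get_header_version(n) for n in ("MAJOR", "MINOR", "PATCH"))
version = f"{major}.{minor}" + (f".{patch}" if patch != "0" else "")
# A "-pre" suffix corresponds to an rc0 pre-release in the Python package.
py_style = version + ("rc0" if get_header_version("SUFFIX") == "-pre" else "")
print(py_style)  # 25.1rc0
```

This matches the `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` check that closes the version test above.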
common/autotest_common.sh@10 -- # set +x 00:06:40.433 18:53:31 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:40.433 18:53:31 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:40.433 18:53:31 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.433 18:53:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.433 18:53:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.433 18:53:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.433 ************************************ 00:06:40.433 START TEST bdev_raid 00:06:40.433 ************************************ 00:06:40.433 18:53:31 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:40.433 * Looking for test storage... 00:06:40.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.433 18:53:31 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.433 18:53:31 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.433 18:53:31 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.692 
18:53:31 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.692 18:53:31 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.692 --rc genhtml_branch_coverage=1 00:06:40.692 --rc genhtml_function_coverage=1 00:06:40.692 --rc genhtml_legend=1 00:06:40.692 --rc geninfo_all_blocks=1 00:06:40.692 --rc geninfo_unexecuted_blocks=1 00:06:40.692 00:06:40.692 ' 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:06:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.692 --rc genhtml_branch_coverage=1 00:06:40.692 --rc genhtml_function_coverage=1 00:06:40.692 --rc genhtml_legend=1 00:06:40.692 --rc geninfo_all_blocks=1 00:06:40.692 --rc geninfo_unexecuted_blocks=1 00:06:40.692 00:06:40.692 ' 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.692 --rc genhtml_branch_coverage=1 00:06:40.692 --rc genhtml_function_coverage=1 00:06:40.692 --rc genhtml_legend=1 00:06:40.692 --rc geninfo_all_blocks=1 00:06:40.692 --rc geninfo_unexecuted_blocks=1 00:06:40.692 00:06:40.692 ' 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.692 --rc genhtml_branch_coverage=1 00:06:40.692 --rc genhtml_function_coverage=1 00:06:40.692 --rc genhtml_legend=1 00:06:40.692 --rc geninfo_all_blocks=1 00:06:40.692 --rc geninfo_unexecuted_blocks=1 00:06:40.692 00:06:40.692 ' 00:06:40.692 18:53:31 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.692 18:53:31 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.692 18:53:31 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:40.692 18:53:31 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:40.692 18:53:31 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:40.692 18:53:31 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:40.692 18:53:31 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.692 18:53:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:40.692 ************************************ 00:06:40.692 START TEST raid1_resize_data_offset_test 00:06:40.692 ************************************ 00:06:40.692 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59974 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59974' 00:06:40.693 Process raid pid: 59974 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59974 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59974 ']' 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.693 18:53:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.693 [2024-11-26 18:53:31.977963] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:06:40.693 [2024-11-26 18:53:31.978426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.951 [2024-11-26 18:53:32.168068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.210 [2024-11-26 18:53:32.322855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.210 [2024-11-26 18:53:32.538003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.210 [2024-11-26 18:53:32.538284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.777 malloc0 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.777 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.111 malloc1 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.111 18:53:33 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.111 null0 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.111 [2024-11-26 18:53:33.187310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:42.111 [2024-11-26 18:53:33.189989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:42.111 [2024-11-26 18:53:33.190076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:42.111 [2024-11-26 18:53:33.190260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.111 [2024-11-26 18:53:33.190281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:42.111 [2024-11-26 18:53:33.190629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:42.111 [2024-11-26 18:53:33.190823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.111 [2024-11-26 18:53:33.190842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.111 [2024-11-26 18:53:33.191260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:42.111 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.112 [2024-11-26 18:53:33.255348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.112 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.694 malloc2
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.694 [2024-11-26 18:53:33.803739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:42.694 [2024-11-26 18:53:33.820341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.694 [2024-11-26 18:53:33.823310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59974
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59974 ']'
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59974
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59974
killing process with pid 59974
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59974'
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59974
00:06:42.694 18:53:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59974
00:06:42.694 [2024-11-26 18:53:33.898986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:42.694 [2024-11-26 18:53:33.900316] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:42.694 [2024-11-26 18:53:33.900402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:42.694 [2024-11-26 18:53:33.900428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:42.694 [2024-11-26 18:53:33.933679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:42.694 [2024-11-26 18:53:33.934118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:42.694 [2024-11-26 18:53:33.934142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:44.598 [2024-11-26 18:53:35.572424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:45.536 ************************************
00:06:45.536 END TEST raid1_resize_data_offset_test
00:06:45.536 ************************************
00:06:45.536 18:53:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:45.536
00:06:45.536 real 0m4.782s
00:06:45.536 user 0m4.751s
00:06:45.536 sys 0m0.652s
00:06:45.536 18:53:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.536 18:53:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:45.536 18:53:36 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:45.536 18:53:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:45.536 18:53:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.536 18:53:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:45.536 ************************************
00:06:45.536 START TEST raid0_resize_superblock_test
00:06:45.536 ************************************
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
Process raid pid: 60057
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60057
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60057'
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60057
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60057 ']'
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:45.536 18:53:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:45.795 [2024-11-26 18:53:36.840049] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
[2024-11-26 18:53:36.840646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:46.054 [2024-11-26 18:53:37.030010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.054 [2024-11-26 18:53:37.189703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.054 [2024-11-26 18:53:37.404959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-26 18:53:37.405235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:46.618 18:53:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:46.618 18:53:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:46.618 18:53:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:46.618 18:53:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:46.618 18:53:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.200 malloc0
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.200 [2024-11-26 18:53:38.395858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:47.200 [2024-11-26 18:53:38.395958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:47.200 [2024-11-26 18:53:38.396006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:47.200 [2024-11-26 18:53:38.396035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:47.200 [2024-11-26 18:53:38.399322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:47.200 [2024-11-26 18:53:38.399508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.200 bae38e81-3007-40b7-8bf9-6f1b0e561ae9
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.200 835f895c-e7da-4263-aae4-d2ee5f89f73c
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.200 d7922c47-2ca5-420e-94c7-ef4602d0c9b2
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.200 [2024-11-26 18:53:38.540747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 835f895c-e7da-4263-aae4-d2ee5f89f73c is claimed
[2024-11-26 18:53:38.540870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d7922c47-2ca5-420e-94c7-ef4602d0c9b2 is claimed
[2024-11-26 18:53:38.541081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-26 18:53:38.541107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-26 18:53:38.541459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-26 18:53:38.541770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-26 18:53:38.541786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-26 18:53:38.542003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.200 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-11-26 18:53:38.649166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.460 [2024-11-26 18:53:38.701173] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 18:53:38.701337] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '835f895c-e7da-4263-aae4-d2ee5f89f73c' was resized: old size 131072, new size 204800
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.460 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.461 [2024-11-26 18:53:38.709048] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 18:53:38.709080] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd7922c47-2ca5-420e-94c7-ef4602d0c9b2' was resized: old size 131072, new size 204800
[2024-11-26 18:53:38.709129] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.461 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.720 [2024-11-26 18:53:38.829190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.720 [2024-11-26 18:53:38.872938] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-26 18:53:38.873183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-26 18:53:38.873356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-26 18:53:38.873490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-26 18:53:38.873745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 18:53:38.873945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 18:53:38.874132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.720 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.721 [2024-11-26 18:53:38.880775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 18:53:38.880982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 18:53:38.881033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-26 18:53:38.881061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 18:53:38.884242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 18:53:38.884411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.721 [2024-11-26 18:53:38.887608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 835f895c-e7da-4263-aae4-d2ee5f89f73c
[2024-11-26 18:53:38.887836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 835f895c-e7da-4263-aae4-d2ee5f89f73c is claimed
[2024-11-26 18:53:38.888194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d7922c47-2ca5-420e-94c7-ef4602d0c9b2
[2024-11-26 18:53:38.888353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d7922c47-2ca5-420e-94c7-ef4602d0c9b2 is claimed
[2024-11-26 18:53:38.888693] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d7922c47-2ca5-420e-94c7-ef4602d0c9b2 (2) smaller than existing raid bdev Raid (3)
[2024-11-26 18:53:38.888873] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 835f895c-e7da-4263-aae4-d2ee5f89f73c: File exists
[2024-11-26 18:53:38.889178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-26 18:53:38.889351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-26 18:53:38.889698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-26 18:53:38.889937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-26 18:53:38.889953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-26 18:53:38.890326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.721 [2024-11-26 18:53:38.902449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60057
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60057 ']'
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60057
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60057
killing process with pid 60057
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60057'
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60057
00:06:47.721 18:53:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60057
00:06:47.721 [2024-11-26 18:53:38.979628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-26 18:53:38.979785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 18:53:38.979886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 18:53:38.979933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:49.097 [2024-11-26 18:53:40.285319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:50.076 18:53:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:50.076
00:06:50.076 real 0m4.633s
00:06:50.076 user 0m4.970s
00:06:50.076 sys 0m0.636s
00:06:50.076 18:53:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:50.076 18:53:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.076 ************************************
00:06:50.076 END TEST raid0_resize_superblock_test
00:06:50.076 ************************************
00:06:50.076 18:53:41 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:50.076 18:53:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:50.076 18:53:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:50.076 18:53:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:50.076 ************************************
00:06:50.076 START TEST raid1_resize_superblock_test
00:06:50.076 ************************************
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60156
Process raid pid: 60156
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60156'
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60156
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60156 ']'
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:50.076 18:53:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.334 [2024-11-26 18:53:41.504496] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
[2024-11-26 18:53:41.504713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:50.591 [2024-11-26 18:53:41.699742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.848 [2024-11-26 18:53:41.858911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.848 [2024-11-26 18:53:42.073079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-26 18:53:42.073125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:51.413 18:53:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:51.413 18:53:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:51.413 18:53:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:51.413 18:53:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.413 18:53:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.982 malloc0
00:06:51.982 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.982 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:51.982 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.982 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.982 [2024-11-26 18:53:43.075304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-26 18:53:43.075396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 18:53:43.075434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-26 18:53:43.075454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 18:53:43.078607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 18:53:43.078658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.983 6ce1a4b4-eb3f-4dac-90ef-fa2c55da52f4
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.983 6bdf454a-f3bc-4afb-8d12-5237b5b8fe62
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.983 160e3a57-bfb4-485c-a087-7e75cc540f63
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.983 [2024-11-26 18:53:43.232598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6bdf454a-f3bc-4afb-8d12-5237b5b8fe62 is claimed
[2024-11-26 18:53:43.232734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 160e3a57-bfb4-485c-a087-7e75cc540f63 is claimed
[2024-11-26 18:53:43.233006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-26 18:53:43.233035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-26 18:53:43.233433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-26 18:53:43.233778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-26 18:53:43.233799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-26 18:53:43.234075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:51.983 18:53:43
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.983 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:52.243 [2024-11-26 18:53:43.348970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 [2024-11-26 18:53:43.404994] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.243 [2024-11-26 18:53:43.405035] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6bdf454a-f3bc-4afb-8d12-5237b5b8fe62' was resized: old size 131072, new size 204800 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 [2024-11-26 18:53:43.412809] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.243 [2024-11-26 18:53:43.412842] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '160e3a57-bfb4-485c-a087-7e75cc540f63' was resized: old size 131072, new size 204800 00:06:52.243 [2024-11-26 18:53:43.412935] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 18:53:43 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 [2024-11-26 18:53:43.524946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 [2024-11-26 18:53:43.572722] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:06:52.243 [2024-11-26 18:53:43.572834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:52.243 [2024-11-26 18:53:43.572877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:52.243 [2024-11-26 18:53:43.573123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.243 [2024-11-26 18:53:43.573429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.243 [2024-11-26 18:53:43.573553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.243 [2024-11-26 18:53:43.573580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 [2024-11-26 18:53:43.580550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:52.243 [2024-11-26 18:53:43.580622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.243 [2024-11-26 18:53:43.580653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:52.243 [2024-11-26 18:53:43.580674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.243 [2024-11-26 18:53:43.583690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.243 [2024-11-26 18:53:43.583744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:52.243 pt0 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.243 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.243 [2024-11-26 18:53:43.586206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6bdf454a-f3bc-4afb-8d12-5237b5b8fe62 00:06:52.243 [2024-11-26 18:53:43.586467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6bdf454a-f3bc-4afb-8d12-5237b5b8fe62 is claimed 00:06:52.243 [2024-11-26 18:53:43.586637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 160e3a57-bfb4-485c-a087-7e75cc540f63 00:06:52.243 [2024-11-26 18:53:43.586673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 160e3a57-bfb4-485c-a087-7e75cc540f63 is claimed 00:06:52.243 [2024-11-26 18:53:43.586840] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 160e3a57-bfb4-485c-a087-7e75cc540f63 (2) smaller than existing raid bdev Raid (3) 00:06:52.243 [2024-11-26 18:53:43.586876] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 6bdf454a-f3bc-4afb-8d12-5237b5b8fe62: File exists 00:06:52.243 [2024-11-26 18:53:43.586944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:52.244 [2024-11-26 18:53:43.586978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:52.244 [2024-11-26 18:53:43.587348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:52.244 [2024-11-26 18:53:43.587600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:52.244 [2024-11-26 
18:53:43.587616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:52.244 [2024-11-26 18:53:43.587818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:52.244 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.244 [2024-11-26 18:53:43.600912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60156 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60156 ']' 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60156 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60156 00:06:52.503 killing process with pid 60156 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60156' 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60156 00:06:52.503 [2024-11-26 18:53:43.682994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.503 18:53:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60156 00:06:52.503 [2024-11-26 18:53:43.683114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.503 [2024-11-26 18:53:43.683195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.503 [2024-11-26 18:53:43.683211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:53.881 [2024-11-26 18:53:45.010666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.819 18:53:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:54.819 00:06:54.819 real 0m4.701s 00:06:54.819 user 0m5.002s 00:06:54.819 sys 0m0.683s 00:06:54.819 18:53:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.819 18:53:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.819 
************************************ 00:06:54.819 END TEST raid1_resize_superblock_test 00:06:54.819 ************************************ 00:06:54.819 18:53:46 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:54.819 18:53:46 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:54.819 18:53:46 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:54.819 18:53:46 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:54.819 18:53:46 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:54.819 18:53:46 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:54.819 18:53:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.819 18:53:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.819 18:53:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.819 ************************************ 00:06:54.819 START TEST raid_function_test_raid0 00:06:54.819 ************************************ 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60258 00:06:54.819 Process raid pid: 60258 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60258' 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60258 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60258 ']' 00:06:54.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.819 18:53:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:55.079 [2024-11-26 18:53:46.279809] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:06:55.079 [2024-11-26 18:53:46.280316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.338 [2024-11-26 18:53:46.465771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.338 [2024-11-26 18:53:46.602431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.598 [2024-11-26 18:53:46.821125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.598 [2024-11-26 18:53:46.821183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.166 Base_1 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.166 Base_2 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.166 [2024-11-26 18:53:47.385076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.166 [2024-11-26 18:53:47.387582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.166 [2024-11-26 18:53:47.387835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.166 [2024-11-26 18:53:47.387866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.166 [2024-11-26 18:53:47.388235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.166 [2024-11-26 18:53:47.388440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:06:56.166 [2024-11-26 18:53:47.388456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:56.166 [2024-11-26 18:53:47.388654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.166 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:56.425 [2024-11-26 18:53:47.681314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:56.425 /dev/nbd0 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.425 1+0 records in 00:06:56.425 1+0 records out 00:06:56.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444616 s, 9.2 MB/s 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.425 18:53:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:56.993 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.994 { 00:06:56.994 "nbd_device": "/dev/nbd0", 00:06:56.994 "bdev_name": "raid" 00:06:56.994 } 00:06:56.994 ]' 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.994 { 00:06:56.994 "nbd_device": "/dev/nbd0", 00:06:56.994 "bdev_name": "raid" 00:06:56.994 } 00:06:56.994 ]' 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:56.994 4096+0 records in 00:06:56.994 4096+0 records out 00:06:56.994 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0246477 s, 85.1 MB/s 00:06:56.994 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:57.253 4096+0 records in 00:06:57.253 4096+0 records out 00:06:57.253 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.308829 s, 6.8 MB/s 00:06:57.253 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:57.253 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:57.254 128+0 records in 00:06:57.254 128+0 records out 00:06:57.254 65536 bytes (66 kB, 64 KiB) copied, 0.0010897 s, 60.1 MB/s 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:57.254 2035+0 records in 00:06:57.254 2035+0 records out 00:06:57.254 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0124778 s, 83.5 MB/s 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:57.254 456+0 records in 00:06:57.254 456+0 records out 00:06:57.254 233472 bytes (233 kB, 228 KiB) copied, 0.00360483 s, 64.8 MB/s 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.254 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:57.821 [2024-11-26 18:53:48.909474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.822 18:53:48 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.822 18:53:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60258 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 
60258 ']' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60258 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60258 00:06:58.080 killing process with pid 60258 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60258' 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60258 00:06:58.080 18:53:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60258 00:06:58.080 [2024-11-26 18:53:49.354789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.080 [2024-11-26 18:53:49.354995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.080 [2024-11-26 18:53:49.355095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.080 [2024-11-26 18:53:49.355140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:58.338 [2024-11-26 18:53:49.558463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.713 18:53:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.713 00:06:59.713 real 0m4.567s 00:06:59.713 user 0m5.568s 00:06:59.713 sys 0m1.080s 00:06:59.713 18:53:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:59.713 ************************************ 00:06:59.713 END TEST raid_function_test_raid0 00:06:59.713 18:53:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.713 ************************************ 00:06:59.713 18:53:50 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:59.713 18:53:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.714 18:53:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.714 18:53:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.714 ************************************ 00:06:59.714 START TEST raid_function_test_concat 00:06:59.714 ************************************ 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60393 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.714 Process raid pid: 60393 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60393' 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60393 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60393 ']' 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.714 18:53:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.714 [2024-11-26 18:53:50.881809] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:06:59.714 [2024-11-26 18:53:50.882006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.714 [2024-11-26 18:53:51.059118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.973 [2024-11-26 18:53:51.194712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.260 [2024-11-26 18:53:51.411125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.260 [2024-11-26 18:53:51.411168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.520 Base_1 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.520 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.778 Base_2 00:07:00.778 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.778 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:00.778 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.778 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.778 [2024-11-26 18:53:51.936725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.779 [2024-11-26 18:53:51.939335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.779 [2024-11-26 18:53:51.939446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.779 [2024-11-26 18:53:51.939467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.779 [2024-11-26 18:53:51.939825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.779 [2024-11-26 18:53:51.940068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.779 [2024-11-26 18:53:51.940095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:00.779 [2024-11-26 18:53:51.940301] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.779 18:53:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:01.119 [2024-11-26 18:53:52.276922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:01.119 /dev/nbd0 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.119 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.119 1+0 records in 00:07:01.119 1+0 records out 00:07:01.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349208 s, 11.7 MB/s 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.120 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.382 { 00:07:01.382 "nbd_device": "/dev/nbd0", 00:07:01.382 "bdev_name": "raid" 00:07:01.382 } 00:07:01.382 ]' 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.382 { 00:07:01.382 "nbd_device": "/dev/nbd0", 00:07:01.382 "bdev_name": "raid" 00:07:01.382 } 00:07:01.382 ]' 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:01.382 
18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 
-- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:01.382 4096+0 records in 00:07:01.382 4096+0 records out 00:07:01.382 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0320129 s, 65.5 MB/s 00:07:01.382 18:53:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:01.950 4096+0 records in 00:07:01.950 4096+0 records out 00:07:01.950 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.344662 s, 6.1 MB/s 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.950 128+0 records in 00:07:01.950 128+0 records out 00:07:01.950 65536 bytes (66 kB, 64 KiB) copied, 0.00106965 s, 61.3 MB/s 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:01.950 2035+0 records in 00:07:01.950 2035+0 records out 00:07:01.950 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00663231 s, 157 MB/s 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.950 456+0 records in 00:07:01.950 456+0 records out 00:07:01.950 233472 bytes (233 kB, 228 KiB) copied, 0.00222023 s, 105 MB/s 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.950 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:02.209 [2024-11-26 18:53:53.431848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.209 18:53:53 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.209 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:02.469 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.469 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.469 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60393 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60393 ']' 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60393 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60393 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.727 killing process with pid 60393 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60393' 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60393 00:07:02.727 [2024-11-26 18:53:53.886776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.727 18:53:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60393 00:07:02.727 [2024-11-26 18:53:53.886924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.727 [2024-11-26 18:53:53.887011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.727 [2024-11-26 18:53:53.887034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:02.727 [2024-11-26 18:53:54.073561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.103 18:53:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:04.103 00:07:04.103 real 0m4.356s 00:07:04.103 user 0m5.329s 00:07:04.103 sys 0m1.013s 00:07:04.103 18:53:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.103 18:53:55 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.103 ************************************ 00:07:04.103 END TEST raid_function_test_concat 00:07:04.103 ************************************ 00:07:04.103 18:53:55 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:04.103 18:53:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.103 18:53:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.103 18:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.103 ************************************ 00:07:04.103 START TEST raid0_resize_test 00:07:04.103 ************************************ 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60521 00:07:04.103 Process raid pid: 60521 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60521' 
00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60521 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60521 ']' 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.103 18:53:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.103 [2024-11-26 18:53:55.318371] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:07:04.103 [2024-11-26 18:53:55.318563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.362 [2024-11-26 18:53:55.503399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.362 [2024-11-26 18:53:55.638199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.621 [2024-11-26 18:53:55.851350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.621 [2024-11-26 18:53:55.851408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 Base_1 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 Base_2 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 [2024-11-26 18:53:56.334484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:05.188 [2024-11-26 18:53:56.337035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:05.188 [2024-11-26 18:53:56.337115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:05.188 [2024-11-26 18:53:56.337137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:05.188 [2024-11-26 18:53:56.337458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:05.188 [2024-11-26 18:53:56.337629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:05.188 [2024-11-26 18:53:56.337644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:05.188 [2024-11-26 18:53:56.337812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 [2024-11-26 18:53:56.342474] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.188 [2024-11-26 18:53:56.342513] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:05.188 true 
00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 [2024-11-26 18:53:56.354710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 [2024-11-26 18:53:56.406542] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.188 [2024-11-26 18:53:56.406584] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:05.188 [2024-11-26 18:53:56.406625] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:05.188 true 
00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.188 [2024-11-26 18:53:56.418729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60521 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60521 ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60521 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60521 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.188 killing process with pid 60521 
00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60521' 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60521 00:07:05.188 18:53:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60521 00:07:05.188 [2024-11-26 18:53:56.500506] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.188 [2024-11-26 18:53:56.500639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.188 [2024-11-26 18:53:56.500712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.188 [2024-11-26 18:53:56.500727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:05.188 [2024-11-26 18:53:56.516699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.568 18:53:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:06.568 00:07:06.568 real 0m2.394s 00:07:06.568 user 0m2.665s 00:07:06.568 sys 0m0.391s 00:07:06.568 18:53:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.568 18:53:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.568 ************************************ 00:07:06.568 END TEST raid0_resize_test 00:07:06.568 ************************************ 00:07:06.568 18:53:57 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:06.568 18:53:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.568 18:53:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.568 18:53:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.568 ************************************ 
00:07:06.568 START TEST raid1_resize_test 00:07:06.568 ************************************ 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60583 00:07:06.568 Process raid pid: 60583 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60583' 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60583 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60583 ']' 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.568 18:53:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.568 [2024-11-26 18:53:57.758492] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:06.568 [2024-11-26 18:53:57.758669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.827 [2024-11-26 18:53:57.950830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.827 [2024-11-26 18:53:58.084283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.085 [2024-11-26 18:53:58.300002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.085 [2024-11-26 18:53:58.300073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 Base_1 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:07.651 
18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 Base_2 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 [2024-11-26 18:53:58.799430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.651 [2024-11-26 18:53:58.801962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.651 [2024-11-26 18:53:58.802050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.651 [2024-11-26 18:53:58.802071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:07.651 [2024-11-26 18:53:58.802399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.651 [2024-11-26 18:53:58.802577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.651 [2024-11-26 18:53:58.802594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:07.651 [2024-11-26 18:53:58.802770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:07.651 18:53:58 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 [2024-11-26 18:53:58.807415] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.651 [2024-11-26 18:53:58.807458] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:07.651 true 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 [2024-11-26 18:53:58.819613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.651 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:07.652 [2024-11-26 18:53:58.863462] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.652 [2024-11-26 18:53:58.863509] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:07.652 [2024-11-26 18:53:58.863550] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:07.652 true 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:07.652 [2024-11-26 18:53:58.875661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60583 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60583 ']' 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60583 00:07:07.652 
18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60583 00:07:07.652 killing process with pid 60583 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60583' 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60583 00:07:07.652 [2024-11-26 18:53:58.956800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.652 18:53:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60583 00:07:07.652 [2024-11-26 18:53:58.956962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.652 [2024-11-26 18:53:58.957632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.652 [2024-11-26 18:53:58.957823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:07.652 [2024-11-26 18:53:58.973762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.026 18:54:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:09.026 00:07:09.026 real 0m2.392s 00:07:09.026 user 0m2.635s 00:07:09.026 sys 0m0.409s 00:07:09.026 18:54:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.026 18:54:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 ************************************ 00:07:09.026 END TEST raid1_resize_test 
00:07:09.026 ************************************ 00:07:09.026 18:54:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:09.026 18:54:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:09.026 18:54:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:09.026 18:54:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.026 18:54:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.026 18:54:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 ************************************ 00:07:09.026 START TEST raid_state_function_test 00:07:09.026 ************************************ 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:09.026 18:54:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60645 00:07:09.026 Process raid pid: 60645 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60645' 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60645 00:07:09.026 18:54:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60645 ']' 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.026 18:54:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-11-26 18:54:00.221004] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:09.026 [2024-11-26 18:54:00.221450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.285 [2024-11-26 18:54:00.410942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.285 [2024-11-26 18:54:00.541152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.543 [2024-11-26 18:54:00.738530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.543 [2024-11-26 18:54:00.738864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.133 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.133 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.133 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.133 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.133 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.133 [2024-11-26 18:54:01.247680] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.133 [2024-11-26 18:54:01.247762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.134 [2024-11-26 18:54:01.247779] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.134 [2024-11-26 18:54:01.247795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.134 
18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.134 "name": "Existed_Raid", 00:07:10.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.134 "strip_size_kb": 64, 00:07:10.134 "state": "configuring", 00:07:10.134 "raid_level": "raid0", 00:07:10.134 "superblock": false, 00:07:10.134 "num_base_bdevs": 2, 00:07:10.134 "num_base_bdevs_discovered": 0, 00:07:10.134 "num_base_bdevs_operational": 2, 00:07:10.134 "base_bdevs_list": [ 00:07:10.134 { 00:07:10.134 "name": "BaseBdev1", 00:07:10.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.134 "is_configured": false, 00:07:10.134 "data_offset": 0, 00:07:10.134 "data_size": 0 00:07:10.134 }, 00:07:10.134 { 00:07:10.134 "name": "BaseBdev2", 00:07:10.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.134 "is_configured": false, 00:07:10.134 "data_offset": 0, 00:07:10.134 "data_size": 0 00:07:10.134 } 00:07:10.134 ] 00:07:10.134 }' 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.134 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:10.700 18:54:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 [2024-11-26 18:54:01.776434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:10.700 [2024-11-26 18:54:01.776632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 [2024-11-26 18:54:01.788416] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.700 [2024-11-26 18:54:01.788499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.700 [2024-11-26 18:54:01.788532] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.700 [2024-11-26 18:54:01.788550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 [2024-11-26 18:54:01.833158] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.700 BaseBdev1 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 [ 00:07:10.700 { 00:07:10.700 "name": "BaseBdev1", 00:07:10.700 "aliases": [ 00:07:10.700 "c156977b-de58-4745-abfd-c87998af318b" 00:07:10.700 ], 00:07:10.700 "product_name": "Malloc disk", 00:07:10.700 "block_size": 512, 00:07:10.700 "num_blocks": 65536, 00:07:10.700 "uuid": 
"c156977b-de58-4745-abfd-c87998af318b", 00:07:10.700 "assigned_rate_limits": { 00:07:10.700 "rw_ios_per_sec": 0, 00:07:10.700 "rw_mbytes_per_sec": 0, 00:07:10.700 "r_mbytes_per_sec": 0, 00:07:10.700 "w_mbytes_per_sec": 0 00:07:10.700 }, 00:07:10.700 "claimed": true, 00:07:10.700 "claim_type": "exclusive_write", 00:07:10.700 "zoned": false, 00:07:10.700 "supported_io_types": { 00:07:10.700 "read": true, 00:07:10.700 "write": true, 00:07:10.700 "unmap": true, 00:07:10.700 "flush": true, 00:07:10.700 "reset": true, 00:07:10.700 "nvme_admin": false, 00:07:10.700 "nvme_io": false, 00:07:10.700 "nvme_io_md": false, 00:07:10.700 "write_zeroes": true, 00:07:10.700 "zcopy": true, 00:07:10.700 "get_zone_info": false, 00:07:10.700 "zone_management": false, 00:07:10.700 "zone_append": false, 00:07:10.700 "compare": false, 00:07:10.700 "compare_and_write": false, 00:07:10.700 "abort": true, 00:07:10.700 "seek_hole": false, 00:07:10.700 "seek_data": false, 00:07:10.700 "copy": true, 00:07:10.700 "nvme_iov_md": false 00:07:10.700 }, 00:07:10.700 "memory_domains": [ 00:07:10.700 { 00:07:10.700 "dma_device_id": "system", 00:07:10.700 "dma_device_type": 1 00:07:10.700 }, 00:07:10.700 { 00:07:10.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.700 "dma_device_type": 2 00:07:10.700 } 00:07:10.700 ], 00:07:10.700 "driver_specific": {} 00:07:10.700 } 00:07:10.700 ] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.700 18:54:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.700 "name": "Existed_Raid", 00:07:10.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.700 "strip_size_kb": 64, 00:07:10.700 "state": "configuring", 00:07:10.700 "raid_level": "raid0", 00:07:10.700 "superblock": false, 00:07:10.700 "num_base_bdevs": 2, 00:07:10.700 "num_base_bdevs_discovered": 1, 00:07:10.700 "num_base_bdevs_operational": 2, 00:07:10.700 "base_bdevs_list": [ 00:07:10.700 { 00:07:10.700 "name": "BaseBdev1", 00:07:10.700 "uuid": "c156977b-de58-4745-abfd-c87998af318b", 00:07:10.700 "is_configured": true, 00:07:10.700 "data_offset": 0, 
00:07:10.700 "data_size": 65536 00:07:10.700 }, 00:07:10.700 { 00:07:10.700 "name": "BaseBdev2", 00:07:10.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.700 "is_configured": false, 00:07:10.700 "data_offset": 0, 00:07:10.700 "data_size": 0 00:07:10.700 } 00:07:10.700 ] 00:07:10.700 }' 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.700 18:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.267 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.267 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.267 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.267 [2024-11-26 18:54:02.385384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.268 [2024-11-26 18:54:02.385592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.268 [2024-11-26 18:54:02.393403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.268 [2024-11-26 18:54:02.396099] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.268 [2024-11-26 18:54:02.396171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.268 "name": "Existed_Raid", 00:07:11.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.268 "strip_size_kb": 64, 00:07:11.268 "state": "configuring", 00:07:11.268 "raid_level": "raid0", 00:07:11.268 "superblock": false, 00:07:11.268 "num_base_bdevs": 2, 00:07:11.268 "num_base_bdevs_discovered": 1, 00:07:11.268 "num_base_bdevs_operational": 2, 00:07:11.268 "base_bdevs_list": [ 00:07:11.268 { 00:07:11.268 "name": "BaseBdev1", 00:07:11.268 "uuid": "c156977b-de58-4745-abfd-c87998af318b", 00:07:11.268 "is_configured": true, 00:07:11.268 "data_offset": 0, 00:07:11.268 "data_size": 65536 00:07:11.268 }, 00:07:11.268 { 00:07:11.268 "name": "BaseBdev2", 00:07:11.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.268 "is_configured": false, 00:07:11.268 "data_offset": 0, 00:07:11.268 "data_size": 0 00:07:11.268 } 00:07:11.268 ] 00:07:11.268 }' 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.268 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.836 [2024-11-26 18:54:02.944424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:11.836 [2024-11-26 18:54:02.944491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:11.836 [2024-11-26 18:54:02.944504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:11.836 [2024-11-26 18:54:02.944834] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.836 [2024-11-26 18:54:02.945110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:11.836 [2024-11-26 18:54:02.945132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:11.836 [2024-11-26 18:54:02.945499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.836 BaseBdev2 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:11.836 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.837 18:54:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.837 [ 00:07:11.837 { 00:07:11.837 "name": "BaseBdev2", 00:07:11.837 "aliases": [ 00:07:11.837 "8b204871-7334-41d4-b307-4a6a27467ec8" 00:07:11.837 ], 00:07:11.837 "product_name": "Malloc disk", 00:07:11.837 "block_size": 512, 00:07:11.837 "num_blocks": 65536, 00:07:11.837 "uuid": "8b204871-7334-41d4-b307-4a6a27467ec8", 00:07:11.837 "assigned_rate_limits": { 00:07:11.837 "rw_ios_per_sec": 0, 00:07:11.837 "rw_mbytes_per_sec": 0, 00:07:11.837 "r_mbytes_per_sec": 0, 00:07:11.837 "w_mbytes_per_sec": 0 00:07:11.837 }, 00:07:11.837 "claimed": true, 00:07:11.837 "claim_type": "exclusive_write", 00:07:11.837 "zoned": false, 00:07:11.837 "supported_io_types": { 00:07:11.837 "read": true, 00:07:11.837 "write": true, 00:07:11.837 "unmap": true, 00:07:11.837 "flush": true, 00:07:11.837 "reset": true, 00:07:11.837 "nvme_admin": false, 00:07:11.837 "nvme_io": false, 00:07:11.837 "nvme_io_md": false, 00:07:11.837 "write_zeroes": true, 00:07:11.837 "zcopy": true, 00:07:11.837 "get_zone_info": false, 00:07:11.837 "zone_management": false, 00:07:11.837 "zone_append": false, 00:07:11.837 "compare": false, 00:07:11.837 "compare_and_write": false, 00:07:11.837 "abort": true, 00:07:11.837 "seek_hole": false, 00:07:11.837 "seek_data": false, 00:07:11.837 "copy": true, 00:07:11.837 "nvme_iov_md": false 00:07:11.837 }, 00:07:11.837 "memory_domains": [ 00:07:11.837 { 00:07:11.837 "dma_device_id": "system", 00:07:11.837 "dma_device_type": 1 00:07:11.837 }, 00:07:11.837 { 00:07:11.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.837 "dma_device_type": 2 00:07:11.837 } 00:07:11.837 ], 00:07:11.837 "driver_specific": {} 00:07:11.837 } 00:07:11.837 ] 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:11.837 18:54:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.837 18:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.837 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.837 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:11.837 "name": "Existed_Raid", 00:07:11.837 "uuid": "7e468859-a048-491b-b8b3-f8b1c242fe89", 00:07:11.837 "strip_size_kb": 64, 00:07:11.837 "state": "online", 00:07:11.837 "raid_level": "raid0", 00:07:11.837 "superblock": false, 00:07:11.837 "num_base_bdevs": 2, 00:07:11.837 "num_base_bdevs_discovered": 2, 00:07:11.837 "num_base_bdevs_operational": 2, 00:07:11.837 "base_bdevs_list": [ 00:07:11.837 { 00:07:11.837 "name": "BaseBdev1", 00:07:11.837 "uuid": "c156977b-de58-4745-abfd-c87998af318b", 00:07:11.837 "is_configured": true, 00:07:11.837 "data_offset": 0, 00:07:11.837 "data_size": 65536 00:07:11.837 }, 00:07:11.837 { 00:07:11.837 "name": "BaseBdev2", 00:07:11.837 "uuid": "8b204871-7334-41d4-b307-4a6a27467ec8", 00:07:11.837 "is_configured": true, 00:07:11.837 "data_offset": 0, 00:07:11.837 "data_size": 65536 00:07:11.837 } 00:07:11.837 ] 00:07:11.837 }' 00:07:11.837 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.837 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.404 [2024-11-26 18:54:03.517047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.404 "name": "Existed_Raid", 00:07:12.404 "aliases": [ 00:07:12.404 "7e468859-a048-491b-b8b3-f8b1c242fe89" 00:07:12.404 ], 00:07:12.404 "product_name": "Raid Volume", 00:07:12.404 "block_size": 512, 00:07:12.404 "num_blocks": 131072, 00:07:12.404 "uuid": "7e468859-a048-491b-b8b3-f8b1c242fe89", 00:07:12.404 "assigned_rate_limits": { 00:07:12.404 "rw_ios_per_sec": 0, 00:07:12.404 "rw_mbytes_per_sec": 0, 00:07:12.404 "r_mbytes_per_sec": 0, 00:07:12.404 "w_mbytes_per_sec": 0 00:07:12.404 }, 00:07:12.404 "claimed": false, 00:07:12.404 "zoned": false, 00:07:12.404 "supported_io_types": { 00:07:12.404 "read": true, 00:07:12.404 "write": true, 00:07:12.404 "unmap": true, 00:07:12.404 "flush": true, 00:07:12.404 "reset": true, 00:07:12.404 "nvme_admin": false, 00:07:12.404 "nvme_io": false, 00:07:12.404 "nvme_io_md": false, 00:07:12.404 "write_zeroes": true, 00:07:12.404 "zcopy": false, 00:07:12.404 "get_zone_info": false, 00:07:12.404 "zone_management": false, 00:07:12.404 "zone_append": false, 00:07:12.404 "compare": false, 00:07:12.404 "compare_and_write": false, 00:07:12.404 "abort": false, 00:07:12.404 "seek_hole": false, 00:07:12.404 "seek_data": false, 00:07:12.404 "copy": false, 00:07:12.404 "nvme_iov_md": false 00:07:12.404 }, 00:07:12.404 "memory_domains": [ 00:07:12.404 { 00:07:12.404 "dma_device_id": "system", 00:07:12.404 "dma_device_type": 1 00:07:12.404 }, 00:07:12.404 { 00:07:12.404 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:12.404 "dma_device_type": 2 00:07:12.404 }, 00:07:12.404 { 00:07:12.404 "dma_device_id": "system", 00:07:12.404 "dma_device_type": 1 00:07:12.404 }, 00:07:12.404 { 00:07:12.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.404 "dma_device_type": 2 00:07:12.404 } 00:07:12.404 ], 00:07:12.404 "driver_specific": { 00:07:12.404 "raid": { 00:07:12.404 "uuid": "7e468859-a048-491b-b8b3-f8b1c242fe89", 00:07:12.404 "strip_size_kb": 64, 00:07:12.404 "state": "online", 00:07:12.404 "raid_level": "raid0", 00:07:12.404 "superblock": false, 00:07:12.404 "num_base_bdevs": 2, 00:07:12.404 "num_base_bdevs_discovered": 2, 00:07:12.404 "num_base_bdevs_operational": 2, 00:07:12.404 "base_bdevs_list": [ 00:07:12.404 { 00:07:12.404 "name": "BaseBdev1", 00:07:12.404 "uuid": "c156977b-de58-4745-abfd-c87998af318b", 00:07:12.404 "is_configured": true, 00:07:12.404 "data_offset": 0, 00:07:12.404 "data_size": 65536 00:07:12.404 }, 00:07:12.404 { 00:07:12.404 "name": "BaseBdev2", 00:07:12.404 "uuid": "8b204871-7334-41d4-b307-4a6a27467ec8", 00:07:12.404 "is_configured": true, 00:07:12.404 "data_offset": 0, 00:07:12.404 "data_size": 65536 00:07:12.404 } 00:07:12.404 ] 00:07:12.404 } 00:07:12.404 } 00:07:12.404 }' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:12.404 BaseBdev2' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.404 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.405 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.405 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.405 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:12.405 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.405 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.405 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:12.663 [2024-11-26 18:54:03.776790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:12.663 [2024-11-26 18:54:03.776833] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.663 [2024-11-26 18:54:03.776916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.663 18:54:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.663 "name": "Existed_Raid", 00:07:12.663 "uuid": "7e468859-a048-491b-b8b3-f8b1c242fe89", 00:07:12.663 "strip_size_kb": 64, 00:07:12.663 "state": "offline", 00:07:12.663 "raid_level": "raid0", 00:07:12.663 "superblock": false, 00:07:12.663 "num_base_bdevs": 2, 00:07:12.663 "num_base_bdevs_discovered": 1, 00:07:12.663 "num_base_bdevs_operational": 1, 00:07:12.663 "base_bdevs_list": [ 00:07:12.663 { 00:07:12.663 "name": null, 00:07:12.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.663 "is_configured": false, 00:07:12.663 "data_offset": 0, 00:07:12.663 "data_size": 65536 00:07:12.663 }, 00:07:12.663 { 00:07:12.663 "name": "BaseBdev2", 00:07:12.663 "uuid": "8b204871-7334-41d4-b307-4a6a27467ec8", 00:07:12.663 "is_configured": true, 00:07:12.663 "data_offset": 0, 00:07:12.663 "data_size": 65536 00:07:12.663 } 00:07:12.663 ] 00:07:12.663 }' 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.663 18:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 [2024-11-26 18:54:04.416055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:13.231 [2024-11-26 18:54:04.416126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.231 18:54:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60645 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60645 ']' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60645 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.231 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60645 00:07:13.489 killing process with pid 60645 00:07:13.489 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.489 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.489 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60645' 00:07:13.489 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60645 00:07:13.489 [2024-11-26 18:54:04.597672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:13.489 18:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60645 00:07:13.489 [2024-11-26 18:54:04.613692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:14.431 00:07:14.431 real 0m5.588s 00:07:14.431 user 0m8.418s 00:07:14.431 sys 0m0.795s 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.431 ************************************ 00:07:14.431 END TEST raid_state_function_test 00:07:14.431 ************************************ 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.431 18:54:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:14.431 18:54:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:14.431 18:54:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.431 18:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.431 ************************************ 00:07:14.431 START TEST raid_state_function_test_sb 00:07:14.431 ************************************ 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:14.431 Process raid pid: 60904 00:07:14.431 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60904 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60904' 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60904 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60904 ']' 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.432 18:54:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.689 [2024-11-26 18:54:05.839258] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:07:14.689 [2024-11-26 18:54:05.839693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.689 [2024-11-26 18:54:06.014331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.947 [2024-11-26 18:54:06.149139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.204 [2024-11-26 18:54:06.357692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.204 [2024-11-26 18:54:06.358011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.770 [2024-11-26 18:54:06.910628] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.770 [2024-11-26 18:54:06.910741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.770 [2024-11-26 18:54:06.910759] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.770 [2024-11-26 18:54:06.910775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.770 
18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.770 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.770 "name": "Existed_Raid", 00:07:15.770 "uuid": "0b9c1b57-b22b-4f3c-bdc4-9328576bf10a", 00:07:15.770 "strip_size_kb": 
64, 00:07:15.770 "state": "configuring", 00:07:15.770 "raid_level": "raid0", 00:07:15.770 "superblock": true, 00:07:15.770 "num_base_bdevs": 2, 00:07:15.770 "num_base_bdevs_discovered": 0, 00:07:15.770 "num_base_bdevs_operational": 2, 00:07:15.770 "base_bdevs_list": [ 00:07:15.770 { 00:07:15.771 "name": "BaseBdev1", 00:07:15.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.771 "is_configured": false, 00:07:15.771 "data_offset": 0, 00:07:15.771 "data_size": 0 00:07:15.771 }, 00:07:15.771 { 00:07:15.771 "name": "BaseBdev2", 00:07:15.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.771 "is_configured": false, 00:07:15.771 "data_offset": 0, 00:07:15.771 "data_size": 0 00:07:15.771 } 00:07:15.771 ] 00:07:15.771 }' 00:07:15.771 18:54:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.771 18:54:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.338 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:16.338 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.339 [2024-11-26 18:54:07.442715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.339 [2024-11-26 18:54:07.442914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.339 18:54:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.339 [2024-11-26 18:54:07.450677] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.339 [2024-11-26 18:54:07.450747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.339 [2024-11-26 18:54:07.450764] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.339 [2024-11-26 18:54:07.450783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.339 [2024-11-26 18:54:07.497190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.339 BaseBdev1 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.339 [ 00:07:16.339 { 00:07:16.339 "name": "BaseBdev1", 00:07:16.339 "aliases": [ 00:07:16.339 "baf4dd30-c620-4d78-99d3-58f4e4b0c5bd" 00:07:16.339 ], 00:07:16.339 "product_name": "Malloc disk", 00:07:16.339 "block_size": 512, 00:07:16.339 "num_blocks": 65536, 00:07:16.339 "uuid": "baf4dd30-c620-4d78-99d3-58f4e4b0c5bd", 00:07:16.339 "assigned_rate_limits": { 00:07:16.339 "rw_ios_per_sec": 0, 00:07:16.339 "rw_mbytes_per_sec": 0, 00:07:16.339 "r_mbytes_per_sec": 0, 00:07:16.339 "w_mbytes_per_sec": 0 00:07:16.339 }, 00:07:16.339 "claimed": true, 00:07:16.339 "claim_type": "exclusive_write", 00:07:16.339 "zoned": false, 00:07:16.339 "supported_io_types": { 00:07:16.339 "read": true, 00:07:16.339 "write": true, 00:07:16.339 "unmap": true, 00:07:16.339 "flush": true, 00:07:16.339 "reset": true, 00:07:16.339 "nvme_admin": false, 00:07:16.339 "nvme_io": false, 00:07:16.339 "nvme_io_md": false, 00:07:16.339 "write_zeroes": true, 00:07:16.339 "zcopy": true, 00:07:16.339 "get_zone_info": false, 00:07:16.339 "zone_management": false, 00:07:16.339 "zone_append": false, 00:07:16.339 "compare": false, 00:07:16.339 "compare_and_write": false, 00:07:16.339 
"abort": true, 00:07:16.339 "seek_hole": false, 00:07:16.339 "seek_data": false, 00:07:16.339 "copy": true, 00:07:16.339 "nvme_iov_md": false 00:07:16.339 }, 00:07:16.339 "memory_domains": [ 00:07:16.339 { 00:07:16.339 "dma_device_id": "system", 00:07:16.339 "dma_device_type": 1 00:07:16.339 }, 00:07:16.339 { 00:07:16.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.339 "dma_device_type": 2 00:07:16.339 } 00:07:16.339 ], 00:07:16.339 "driver_specific": {} 00:07:16.339 } 00:07:16.339 ] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.339 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.339 "name": "Existed_Raid", 00:07:16.339 "uuid": "0787a7c0-be35-4a1e-a1b2-c2bf1f66b860", 00:07:16.339 "strip_size_kb": 64, 00:07:16.339 "state": "configuring", 00:07:16.339 "raid_level": "raid0", 00:07:16.339 "superblock": true, 00:07:16.339 "num_base_bdevs": 2, 00:07:16.339 "num_base_bdevs_discovered": 1, 00:07:16.339 "num_base_bdevs_operational": 2, 00:07:16.339 "base_bdevs_list": [ 00:07:16.339 { 00:07:16.339 "name": "BaseBdev1", 00:07:16.339 "uuid": "baf4dd30-c620-4d78-99d3-58f4e4b0c5bd", 00:07:16.339 "is_configured": true, 00:07:16.339 "data_offset": 2048, 00:07:16.339 "data_size": 63488 00:07:16.339 }, 00:07:16.339 { 00:07:16.339 "name": "BaseBdev2", 00:07:16.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.339 "is_configured": false, 00:07:16.339 "data_offset": 0, 00:07:16.339 "data_size": 0 00:07:16.339 } 00:07:16.339 ] 00:07:16.339 }' 00:07:16.340 18:54:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.340 18:54:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.906 [2024-11-26 18:54:08.065463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.906 [2024-11-26 18:54:08.065529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 [2024-11-26 18:54:08.077521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.906 [2024-11-26 18:54:08.080375] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.906 [2024-11-26 18:54:08.080552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.906 "name": "Existed_Raid", 00:07:16.906 "uuid": "4528a764-67d1-465f-9fc4-0c4fb19d9b9e", 00:07:16.906 "strip_size_kb": 64, 00:07:16.906 "state": "configuring", 00:07:16.906 "raid_level": "raid0", 00:07:16.906 "superblock": true, 00:07:16.906 "num_base_bdevs": 2, 00:07:16.906 "num_base_bdevs_discovered": 1, 00:07:16.906 "num_base_bdevs_operational": 2, 00:07:16.906 "base_bdevs_list": [ 00:07:16.906 { 00:07:16.906 "name": "BaseBdev1", 00:07:16.906 "uuid": "baf4dd30-c620-4d78-99d3-58f4e4b0c5bd", 00:07:16.906 "is_configured": true, 00:07:16.906 "data_offset": 2048, 
00:07:16.906 "data_size": 63488 00:07:16.906 }, 00:07:16.906 { 00:07:16.906 "name": "BaseBdev2", 00:07:16.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.906 "is_configured": false, 00:07:16.906 "data_offset": 0, 00:07:16.906 "data_size": 0 00:07:16.906 } 00:07:16.906 ] 00:07:16.906 }' 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.906 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.473 [2024-11-26 18:54:08.631495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.473 [2024-11-26 18:54:08.631854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:17.473 [2024-11-26 18:54:08.631872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.473 BaseBdev2 00:07:17.473 [2024-11-26 18:54:08.632265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:17.473 [2024-11-26 18:54:08.632467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:17.473 [2024-11-26 18:54:08.632492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:17.473 [2024-11-26 18:54:08.632673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.473 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.473 [ 00:07:17.473 { 00:07:17.473 "name": "BaseBdev2", 00:07:17.473 "aliases": [ 00:07:17.473 "2badccec-0396-436e-b592-5390b513e68a" 00:07:17.473 ], 00:07:17.473 "product_name": "Malloc disk", 00:07:17.473 "block_size": 512, 00:07:17.473 "num_blocks": 65536, 00:07:17.473 "uuid": "2badccec-0396-436e-b592-5390b513e68a", 00:07:17.473 "assigned_rate_limits": { 00:07:17.473 "rw_ios_per_sec": 0, 00:07:17.473 "rw_mbytes_per_sec": 0, 00:07:17.473 "r_mbytes_per_sec": 0, 00:07:17.473 "w_mbytes_per_sec": 0 00:07:17.473 }, 00:07:17.473 "claimed": true, 00:07:17.473 "claim_type": 
"exclusive_write", 00:07:17.473 "zoned": false, 00:07:17.473 "supported_io_types": { 00:07:17.473 "read": true, 00:07:17.473 "write": true, 00:07:17.473 "unmap": true, 00:07:17.473 "flush": true, 00:07:17.473 "reset": true, 00:07:17.473 "nvme_admin": false, 00:07:17.473 "nvme_io": false, 00:07:17.474 "nvme_io_md": false, 00:07:17.474 "write_zeroes": true, 00:07:17.474 "zcopy": true, 00:07:17.474 "get_zone_info": false, 00:07:17.474 "zone_management": false, 00:07:17.474 "zone_append": false, 00:07:17.474 "compare": false, 00:07:17.474 "compare_and_write": false, 00:07:17.474 "abort": true, 00:07:17.474 "seek_hole": false, 00:07:17.474 "seek_data": false, 00:07:17.474 "copy": true, 00:07:17.474 "nvme_iov_md": false 00:07:17.474 }, 00:07:17.474 "memory_domains": [ 00:07:17.474 { 00:07:17.474 "dma_device_id": "system", 00:07:17.474 "dma_device_type": 1 00:07:17.474 }, 00:07:17.474 { 00:07:17.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.474 "dma_device_type": 2 00:07:17.474 } 00:07:17.474 ], 00:07:17.474 "driver_specific": {} 00:07:17.474 } 00:07:17.474 ] 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.474 "name": "Existed_Raid", 00:07:17.474 "uuid": "4528a764-67d1-465f-9fc4-0c4fb19d9b9e", 00:07:17.474 "strip_size_kb": 64, 00:07:17.474 "state": "online", 00:07:17.474 "raid_level": "raid0", 00:07:17.474 "superblock": true, 00:07:17.474 "num_base_bdevs": 2, 00:07:17.474 "num_base_bdevs_discovered": 2, 00:07:17.474 "num_base_bdevs_operational": 2, 00:07:17.474 "base_bdevs_list": [ 00:07:17.474 { 00:07:17.474 "name": "BaseBdev1", 00:07:17.474 "uuid": "baf4dd30-c620-4d78-99d3-58f4e4b0c5bd", 00:07:17.474 "is_configured": true, 00:07:17.474 "data_offset": 2048, 00:07:17.474 "data_size": 63488 
00:07:17.474 }, 00:07:17.474 { 00:07:17.474 "name": "BaseBdev2", 00:07:17.474 "uuid": "2badccec-0396-436e-b592-5390b513e68a", 00:07:17.474 "is_configured": true, 00:07:17.474 "data_offset": 2048, 00:07:17.474 "data_size": 63488 00:07:17.474 } 00:07:17.474 ] 00:07:17.474 }' 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.474 18:54:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.044 [2024-11-26 18:54:09.184216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.044 "name": 
"Existed_Raid", 00:07:18.044 "aliases": [ 00:07:18.044 "4528a764-67d1-465f-9fc4-0c4fb19d9b9e" 00:07:18.044 ], 00:07:18.044 "product_name": "Raid Volume", 00:07:18.044 "block_size": 512, 00:07:18.044 "num_blocks": 126976, 00:07:18.044 "uuid": "4528a764-67d1-465f-9fc4-0c4fb19d9b9e", 00:07:18.044 "assigned_rate_limits": { 00:07:18.044 "rw_ios_per_sec": 0, 00:07:18.044 "rw_mbytes_per_sec": 0, 00:07:18.044 "r_mbytes_per_sec": 0, 00:07:18.044 "w_mbytes_per_sec": 0 00:07:18.044 }, 00:07:18.044 "claimed": false, 00:07:18.044 "zoned": false, 00:07:18.044 "supported_io_types": { 00:07:18.044 "read": true, 00:07:18.044 "write": true, 00:07:18.044 "unmap": true, 00:07:18.044 "flush": true, 00:07:18.044 "reset": true, 00:07:18.044 "nvme_admin": false, 00:07:18.044 "nvme_io": false, 00:07:18.044 "nvme_io_md": false, 00:07:18.044 "write_zeroes": true, 00:07:18.044 "zcopy": false, 00:07:18.044 "get_zone_info": false, 00:07:18.044 "zone_management": false, 00:07:18.044 "zone_append": false, 00:07:18.044 "compare": false, 00:07:18.044 "compare_and_write": false, 00:07:18.044 "abort": false, 00:07:18.044 "seek_hole": false, 00:07:18.044 "seek_data": false, 00:07:18.044 "copy": false, 00:07:18.044 "nvme_iov_md": false 00:07:18.044 }, 00:07:18.044 "memory_domains": [ 00:07:18.044 { 00:07:18.044 "dma_device_id": "system", 00:07:18.044 "dma_device_type": 1 00:07:18.044 }, 00:07:18.044 { 00:07:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.044 "dma_device_type": 2 00:07:18.044 }, 00:07:18.044 { 00:07:18.044 "dma_device_id": "system", 00:07:18.044 "dma_device_type": 1 00:07:18.044 }, 00:07:18.044 { 00:07:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.044 "dma_device_type": 2 00:07:18.044 } 00:07:18.044 ], 00:07:18.044 "driver_specific": { 00:07:18.044 "raid": { 00:07:18.044 "uuid": "4528a764-67d1-465f-9fc4-0c4fb19d9b9e", 00:07:18.044 "strip_size_kb": 64, 00:07:18.044 "state": "online", 00:07:18.044 "raid_level": "raid0", 00:07:18.044 "superblock": true, 00:07:18.044 
"num_base_bdevs": 2, 00:07:18.044 "num_base_bdevs_discovered": 2, 00:07:18.044 "num_base_bdevs_operational": 2, 00:07:18.044 "base_bdevs_list": [ 00:07:18.044 { 00:07:18.044 "name": "BaseBdev1", 00:07:18.044 "uuid": "baf4dd30-c620-4d78-99d3-58f4e4b0c5bd", 00:07:18.044 "is_configured": true, 00:07:18.044 "data_offset": 2048, 00:07:18.044 "data_size": 63488 00:07:18.044 }, 00:07:18.044 { 00:07:18.044 "name": "BaseBdev2", 00:07:18.044 "uuid": "2badccec-0396-436e-b592-5390b513e68a", 00:07:18.044 "is_configured": true, 00:07:18.044 "data_offset": 2048, 00:07:18.044 "data_size": 63488 00:07:18.044 } 00:07:18.044 ] 00:07:18.044 } 00:07:18.044 } 00:07:18.044 }' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:18.044 BaseBdev2' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
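The verification step above pipes the `bdev_get_bdevs` output through the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` to collect the configured base bdev names. A minimal Python sketch of what that filter computes, using an illustrative subset of the "Existed_Raid" JSON dumped above (the full dump carries many more fields such as `supported_io_types` and `memory_domains`):

```python
import json

# Illustrative subset of the "Existed_Raid" JSON from the log above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# Python equivalent of the jq filter used by bdev_raid.sh@188:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # ['BaseBdev1', 'BaseBdev2']
```

The test then compares each name's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple against the raid volume's, which is why the log shows `cmp_raid_bdev='512 '` matching `cmp_base_bdev='512 '`.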
00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.044 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.302 [2024-11-26 18:54:09.456063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:18.302 [2024-11-26 18:54:09.456109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.302 [2024-11-26 18:54:09.456180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.302 18:54:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.302 "name": "Existed_Raid", 00:07:18.302 "uuid": "4528a764-67d1-465f-9fc4-0c4fb19d9b9e", 00:07:18.302 "strip_size_kb": 64, 00:07:18.302 "state": "offline", 00:07:18.302 "raid_level": "raid0", 00:07:18.302 "superblock": true, 00:07:18.302 "num_base_bdevs": 2, 00:07:18.302 "num_base_bdevs_discovered": 1, 00:07:18.302 "num_base_bdevs_operational": 1, 00:07:18.302 "base_bdevs_list": [ 00:07:18.302 { 00:07:18.302 "name": null, 00:07:18.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.302 "is_configured": false, 00:07:18.302 "data_offset": 0, 00:07:18.302 "data_size": 63488 00:07:18.302 }, 00:07:18.302 { 00:07:18.302 "name": "BaseBdev2", 00:07:18.302 "uuid": "2badccec-0396-436e-b592-5390b513e68a", 00:07:18.302 "is_configured": true, 00:07:18.302 "data_offset": 2048, 00:07:18.302 "data_size": 63488 00:07:18.302 } 00:07:18.302 ] 00:07:18.302 }' 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.302 18:54:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:18.868 18:54:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.868 [2024-11-26 18:54:10.132433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:18.868 [2024-11-26 18:54:10.132506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.868 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.126 18:54:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60904 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60904 ']' 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60904 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60904 00:07:19.126 killing process with pid 60904 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.126 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60904' 00:07:19.127 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60904 00:07:19.127 [2024-11-26 18:54:10.311978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.127 18:54:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60904 00:07:19.127 [2024-11-26 18:54:10.326355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.059 ************************************ 
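The test that just finished exercises the raid state machine: deleting BaseBdev1 drives `Existed_Raid` from `online` to `offline` (raid0 has no redundancy, so `has_redundancy raid0` returns 1 and `expected_state=offline`), with the removed slot left as an unconfigured null entry. A sketch of the checks `verify_raid_bdev_state` performs, applied to the offline JSON dumped above; `verify_state` is a hypothetical helper, the real checks live in `bdev_raid.sh`:

```python
# "Existed_Raid" after BaseBdev1 was deleted, per the dump in the log above.
offline_info = {
    "name": "Existed_Raid",
    "state": "offline",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1,
    "base_bdevs_list": [
        {"name": None, "is_configured": False},    # slot of the removed BaseBdev1
        {"name": "BaseBdev2", "is_configured": True},
    ],
}

def verify_state(info, expected_state, raid_level, strip_size, num_operational):
    # Hypothetical re-implementation of verify_raid_bdev_state's assertions.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must match the configured entries in base_bdevs_list.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]

# raid0 stripes without redundancy, so losing one base bdev takes the array offline.
verify_state(offline_info, "offline", "raid0", 64, 1)
print("state checks passed")
```

Deleting BaseBdev2 afterwards leaves no raid bdev at all, which is why the later `jq -r '.[0]["name"] | select(.)'` yields an empty `raid_bdev=`.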
00:07:20.059 END TEST raid_state_function_test_sb 00:07:20.059 ************************************ 00:07:20.059 18:54:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:20.059 00:07:20.059 real 0m5.638s 00:07:20.059 user 0m8.587s 00:07:20.059 sys 0m0.764s 00:07:20.059 18:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.059 18:54:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.317 18:54:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:20.317 18:54:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:20.317 18:54:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.317 18:54:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.317 ************************************ 00:07:20.317 START TEST raid_superblock_test 00:07:20.317 ************************************ 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:20.317 
18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61158 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61158 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61158 ']' 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
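The raid_superblock_test run that follows builds `raid_bdev1` from two 32 MiB malloc bdevs with 512-byte blocks (`bdev_malloc_create 32 512`), each wrapped in a passthru bdev (`pt1`, `pt2`), assembled with `bdev_raid_create -z 64 -r raid0 -s`. A back-of-the-envelope check of the geometry reported later in the log, assuming the `data_offset` of 2048 blocks seen in the dump is the region reserved when the superblock flag is set:

```python
# Geometry from the raid_bdev1 dump later in this log.
block_size = 512
base_bdev_blocks = 32 * 1024 * 1024 // block_size   # 65536 blocks per 32 MiB malloc bdev
data_offset = 2048                                  # blocks reserved on each base bdev
data_size = base_bdev_blocks - data_offset          # 63488, as in the dump
num_base_bdevs = 2

# raid0 stripes across the bases, so capacity is the sum of the usable regions.
raid_num_blocks = num_base_bdevs * data_size
print(raid_num_blocks)  # 126976, matching "num_blocks" in the raid_bdev1 dump
```

This is why the raid volume reports `num_blocks: 126976` while each base bdev contributes `data_size: 63488` at `data_offset: 2048`.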
00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.317 18:54:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.317 [2024-11-26 18:54:11.540881] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:20.317 [2024-11-26 18:54:11.541435] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61158 ] 00:07:20.575 [2024-11-26 18:54:11.722634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.575 [2024-11-26 18:54:11.856273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.833 [2024-11-26 18:54:12.063706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.833 [2024-11-26 18:54:12.064102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.401 18:54:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.401 malloc1 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.401 [2024-11-26 18:54:12.627119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.401 [2024-11-26 18:54:12.627192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.401 [2024-11-26 18:54:12.627225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:21.401 [2024-11-26 18:54:12.627240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.401 [2024-11-26 18:54:12.630519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.401 [2024-11-26 18:54:12.630570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.401 pt1 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.401 18:54:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.401 malloc2 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:21.401 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.402 [2024-11-26 18:54:12.683143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:21.402 [2024-11-26 18:54:12.683370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.402 [2024-11-26 18:54:12.683458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:21.402 
[2024-11-26 18:54:12.683609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.402 [2024-11-26 18:54:12.686569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.402 [2024-11-26 18:54:12.686744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:21.402 pt2 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.402 [2024-11-26 18:54:12.695290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.402 [2024-11-26 18:54:12.698224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:21.402 [2024-11-26 18:54:12.698494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:21.402 [2024-11-26 18:54:12.698513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.402 [2024-11-26 18:54:12.698885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.402 [2024-11-26 18:54:12.699316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:21.402 [2024-11-26 18:54:12.699459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:21.402 [2024-11-26 18:54:12.700033] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.402 "name": "raid_bdev1", 00:07:21.402 "uuid": 
"855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:21.402 "strip_size_kb": 64, 00:07:21.402 "state": "online", 00:07:21.402 "raid_level": "raid0", 00:07:21.402 "superblock": true, 00:07:21.402 "num_base_bdevs": 2, 00:07:21.402 "num_base_bdevs_discovered": 2, 00:07:21.402 "num_base_bdevs_operational": 2, 00:07:21.402 "base_bdevs_list": [ 00:07:21.402 { 00:07:21.402 "name": "pt1", 00:07:21.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.402 "is_configured": true, 00:07:21.402 "data_offset": 2048, 00:07:21.402 "data_size": 63488 00:07:21.402 }, 00:07:21.402 { 00:07:21.402 "name": "pt2", 00:07:21.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.402 "is_configured": true, 00:07:21.402 "data_offset": 2048, 00:07:21.402 "data_size": 63488 00:07:21.402 } 00:07:21.402 ] 00:07:21.402 }' 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.402 18:54:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.968 18:54:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.968 [2024-11-26 18:54:13.208428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.968 "name": "raid_bdev1", 00:07:21.968 "aliases": [ 00:07:21.968 "855c8a2f-3a01-4e78-8033-8cd633a2d2b0" 00:07:21.968 ], 00:07:21.968 "product_name": "Raid Volume", 00:07:21.968 "block_size": 512, 00:07:21.968 "num_blocks": 126976, 00:07:21.968 "uuid": "855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:21.968 "assigned_rate_limits": { 00:07:21.968 "rw_ios_per_sec": 0, 00:07:21.968 "rw_mbytes_per_sec": 0, 00:07:21.968 "r_mbytes_per_sec": 0, 00:07:21.968 "w_mbytes_per_sec": 0 00:07:21.968 }, 00:07:21.968 "claimed": false, 00:07:21.968 "zoned": false, 00:07:21.968 "supported_io_types": { 00:07:21.968 "read": true, 00:07:21.968 "write": true, 00:07:21.968 "unmap": true, 00:07:21.968 "flush": true, 00:07:21.968 "reset": true, 00:07:21.968 "nvme_admin": false, 00:07:21.968 "nvme_io": false, 00:07:21.968 "nvme_io_md": false, 00:07:21.968 "write_zeroes": true, 00:07:21.968 "zcopy": false, 00:07:21.968 "get_zone_info": false, 00:07:21.968 "zone_management": false, 00:07:21.968 "zone_append": false, 00:07:21.968 "compare": false, 00:07:21.968 "compare_and_write": false, 00:07:21.968 "abort": false, 00:07:21.968 "seek_hole": false, 00:07:21.968 "seek_data": false, 00:07:21.968 "copy": false, 00:07:21.968 "nvme_iov_md": false 00:07:21.968 }, 00:07:21.968 "memory_domains": [ 00:07:21.968 { 00:07:21.968 "dma_device_id": "system", 00:07:21.968 "dma_device_type": 1 00:07:21.968 }, 00:07:21.968 { 00:07:21.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.968 "dma_device_type": 2 00:07:21.968 }, 00:07:21.968 { 00:07:21.968 "dma_device_id": "system", 00:07:21.968 "dma_device_type": 
1 00:07:21.968 }, 00:07:21.968 { 00:07:21.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.968 "dma_device_type": 2 00:07:21.968 } 00:07:21.968 ], 00:07:21.968 "driver_specific": { 00:07:21.968 "raid": { 00:07:21.968 "uuid": "855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:21.968 "strip_size_kb": 64, 00:07:21.968 "state": "online", 00:07:21.968 "raid_level": "raid0", 00:07:21.968 "superblock": true, 00:07:21.968 "num_base_bdevs": 2, 00:07:21.968 "num_base_bdevs_discovered": 2, 00:07:21.968 "num_base_bdevs_operational": 2, 00:07:21.968 "base_bdevs_list": [ 00:07:21.968 { 00:07:21.968 "name": "pt1", 00:07:21.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.968 "is_configured": true, 00:07:21.968 "data_offset": 2048, 00:07:21.968 "data_size": 63488 00:07:21.968 }, 00:07:21.968 { 00:07:21.968 "name": "pt2", 00:07:21.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.968 "is_configured": true, 00:07:21.968 "data_offset": 2048, 00:07:21.968 "data_size": 63488 00:07:21.968 } 00:07:21.968 ] 00:07:21.968 } 00:07:21.968 } 00:07:21.968 }' 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:21.968 pt2' 00:07:21.968 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.226 [2024-11-26 18:54:13.464551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.226 18:54:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=855c8a2f-3a01-4e78-8033-8cd633a2d2b0 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 855c8a2f-3a01-4e78-8033-8cd633a2d2b0 ']' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.226 [2024-11-26 18:54:13.520156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.226 [2024-11-26 18:54:13.520192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.226 [2024-11-26 18:54:13.520317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.226 [2024-11-26 18:54:13.520388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.226 [2024-11-26 18:54:13.520408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:22.226 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.227 18:54:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.227 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.485 [2024-11-26 18:54:13.656333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:22.485 [2024-11-26 18:54:13.659100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:22.485 [2024-11-26 18:54:13.659207] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:22.485 [2024-11-26 18:54:13.659284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:22.485 [2024-11-26 18:54:13.659311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.485 [2024-11-26 18:54:13.659331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:22.485 request: 00:07:22.485 { 00:07:22.485 "name": "raid_bdev1", 00:07:22.485 "raid_level": "raid0", 00:07:22.485 "base_bdevs": [ 00:07:22.485 "malloc1", 00:07:22.485 "malloc2" 00:07:22.485 ], 00:07:22.485 "strip_size_kb": 64, 00:07:22.485 "superblock": false, 00:07:22.485 "method": "bdev_raid_create", 00:07:22.485 "req_id": 1 00:07:22.485 } 00:07:22.485 Got JSON-RPC error response 00:07:22.485 response: 00:07:22.485 { 00:07:22.485 "code": -17, 00:07:22.485 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:22.485 } 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.485 [2024-11-26 18:54:13.728387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:22.485 [2024-11-26 18:54:13.728489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.485 [2024-11-26 18:54:13.728516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:22.485 [2024-11-26 18:54:13.728533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.485 [2024-11-26 18:54:13.731737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.485 [2024-11-26 18:54:13.731798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:22.485 [2024-11-26 18:54:13.731953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:22.485 [2024-11-26 18:54:13.732030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:22.485 pt1 00:07:22.485 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.486 "name": "raid_bdev1", 00:07:22.486 "uuid": "855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:22.486 "strip_size_kb": 64, 00:07:22.486 "state": "configuring", 00:07:22.486 "raid_level": "raid0", 00:07:22.486 "superblock": true, 00:07:22.486 "num_base_bdevs": 2, 00:07:22.486 "num_base_bdevs_discovered": 1, 00:07:22.486 "num_base_bdevs_operational": 2, 00:07:22.486 "base_bdevs_list": [ 00:07:22.486 { 00:07:22.486 "name": "pt1", 00:07:22.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.486 "is_configured": true, 00:07:22.486 "data_offset": 2048, 00:07:22.486 "data_size": 63488 00:07:22.486 }, 00:07:22.486 { 00:07:22.486 "name": null, 00:07:22.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.486 "is_configured": false, 00:07:22.486 "data_offset": 2048, 00:07:22.486 "data_size": 63488 00:07:22.486 } 00:07:22.486 ] 00:07:22.486 }' 00:07:22.486 18:54:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.486 18:54:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.054 [2024-11-26 18:54:14.248554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:23.054 [2024-11-26 18:54:14.248696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.054 [2024-11-26 18:54:14.248736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:23.054 [2024-11-26 18:54:14.248754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.054 [2024-11-26 18:54:14.249430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.054 [2024-11-26 18:54:14.249469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:23.054 [2024-11-26 18:54:14.249581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:23.054 [2024-11-26 18:54:14.249632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:23.054 [2024-11-26 18:54:14.249790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.054 [2024-11-26 18:54:14.249812] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.054 [2024-11-26 18:54:14.250141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:23.054 [2024-11-26 18:54:14.250329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.054 [2024-11-26 18:54:14.250344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:23.054 [2024-11-26 18:54:14.250521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.054 pt2 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.054 "name": "raid_bdev1", 00:07:23.054 "uuid": "855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:23.054 "strip_size_kb": 64, 00:07:23.054 "state": "online", 00:07:23.054 "raid_level": "raid0", 00:07:23.054 "superblock": true, 00:07:23.054 "num_base_bdevs": 2, 00:07:23.054 "num_base_bdevs_discovered": 2, 00:07:23.054 "num_base_bdevs_operational": 2, 00:07:23.054 "base_bdevs_list": [ 00:07:23.054 { 00:07:23.054 "name": "pt1", 00:07:23.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.054 "is_configured": true, 00:07:23.054 "data_offset": 2048, 00:07:23.054 "data_size": 63488 00:07:23.054 }, 00:07:23.054 { 00:07:23.054 "name": "pt2", 00:07:23.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.054 "is_configured": true, 00:07:23.054 "data_offset": 2048, 00:07:23.054 "data_size": 63488 00:07:23.054 } 00:07:23.054 ] 00:07:23.054 }' 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.054 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:23.622 
18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.622 [2024-11-26 18:54:14.797052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.622 "name": "raid_bdev1", 00:07:23.622 "aliases": [ 00:07:23.622 "855c8a2f-3a01-4e78-8033-8cd633a2d2b0" 00:07:23.622 ], 00:07:23.622 "product_name": "Raid Volume", 00:07:23.622 "block_size": 512, 00:07:23.622 "num_blocks": 126976, 00:07:23.622 "uuid": "855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:23.622 "assigned_rate_limits": { 00:07:23.622 "rw_ios_per_sec": 0, 00:07:23.622 "rw_mbytes_per_sec": 0, 00:07:23.622 "r_mbytes_per_sec": 0, 00:07:23.622 "w_mbytes_per_sec": 0 00:07:23.622 }, 00:07:23.622 "claimed": false, 00:07:23.622 "zoned": false, 00:07:23.622 "supported_io_types": { 00:07:23.622 "read": true, 00:07:23.622 "write": true, 00:07:23.622 "unmap": true, 00:07:23.622 "flush": true, 00:07:23.622 "reset": true, 00:07:23.622 "nvme_admin": false, 00:07:23.622 "nvme_io": false, 00:07:23.622 "nvme_io_md": false, 00:07:23.622 
"write_zeroes": true, 00:07:23.622 "zcopy": false, 00:07:23.622 "get_zone_info": false, 00:07:23.622 "zone_management": false, 00:07:23.622 "zone_append": false, 00:07:23.622 "compare": false, 00:07:23.622 "compare_and_write": false, 00:07:23.622 "abort": false, 00:07:23.622 "seek_hole": false, 00:07:23.622 "seek_data": false, 00:07:23.622 "copy": false, 00:07:23.622 "nvme_iov_md": false 00:07:23.622 }, 00:07:23.622 "memory_domains": [ 00:07:23.622 { 00:07:23.622 "dma_device_id": "system", 00:07:23.622 "dma_device_type": 1 00:07:23.622 }, 00:07:23.622 { 00:07:23.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.622 "dma_device_type": 2 00:07:23.622 }, 00:07:23.622 { 00:07:23.622 "dma_device_id": "system", 00:07:23.622 "dma_device_type": 1 00:07:23.622 }, 00:07:23.622 { 00:07:23.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.622 "dma_device_type": 2 00:07:23.622 } 00:07:23.622 ], 00:07:23.622 "driver_specific": { 00:07:23.622 "raid": { 00:07:23.622 "uuid": "855c8a2f-3a01-4e78-8033-8cd633a2d2b0", 00:07:23.622 "strip_size_kb": 64, 00:07:23.622 "state": "online", 00:07:23.622 "raid_level": "raid0", 00:07:23.622 "superblock": true, 00:07:23.622 "num_base_bdevs": 2, 00:07:23.622 "num_base_bdevs_discovered": 2, 00:07:23.622 "num_base_bdevs_operational": 2, 00:07:23.622 "base_bdevs_list": [ 00:07:23.622 { 00:07:23.622 "name": "pt1", 00:07:23.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.622 "is_configured": true, 00:07:23.622 "data_offset": 2048, 00:07:23.622 "data_size": 63488 00:07:23.622 }, 00:07:23.622 { 00:07:23.622 "name": "pt2", 00:07:23.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.622 "is_configured": true, 00:07:23.622 "data_offset": 2048, 00:07:23.622 "data_size": 63488 00:07:23.622 } 00:07:23.622 ] 00:07:23.622 } 00:07:23.622 } 00:07:23.622 }' 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:23.622 pt2' 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.622 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.940 18:54:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.940 18:54:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.940 [2024-11-26 18:54:15.053147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 855c8a2f-3a01-4e78-8033-8cd633a2d2b0 '!=' 855c8a2f-3a01-4e78-8033-8cd633a2d2b0 ']' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61158 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61158 ']' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61158 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61158 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61158' 00:07:23.940 killing process with pid 61158 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61158 00:07:23.940 [2024-11-26 18:54:15.136919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.940 18:54:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61158 00:07:23.940 [2024-11-26 18:54:15.137069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.940 [2024-11-26 18:54:15.137174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.940 [2024-11-26 18:54:15.137203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:24.198 [2024-11-26 18:54:15.325260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.133 18:54:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:25.133 00:07:25.133 real 0m4.907s 00:07:25.133 user 0m7.266s 00:07:25.133 sys 0m0.701s 00:07:25.133 18:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.133 ************************************ 00:07:25.133 END TEST raid_superblock_test 00:07:25.133 ************************************ 00:07:25.133 18:54:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.133 18:54:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:25.133 18:54:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:25.133 18:54:16 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:25.133 18:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.133 ************************************ 00:07:25.133 START TEST raid_read_error_test 00:07:25.133 ************************************ 00:07:25.133 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:25.133 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ngAOIpM3fI 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61375 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61375 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61375 ']' 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
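The xtrace above shows the `(( i = 1 )) / (( i <= num_base_bdevs )) / echo BaseBdev$i` loop that `raid_io_error_test` uses to build its `base_bdevs` array, ending in `base_bdevs=('BaseBdev1' 'BaseBdev2')`. A minimal standalone bash sketch of the same idiom (written with `+=` appends, equivalent to the command-substitution form seen in the trace):

```shell
#!/usr/bin/env bash
# Build one "BaseBdevN" name per base device, as the traced loop does.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"   # -> BaseBdev1 BaseBdev2
```

The array is then consumed by the later `for bdev in "${base_bdevs[@]}"` loop that creates each base device.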
00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.134 18:54:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.394 [2024-11-26 18:54:16.526334] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:25.394 [2024-11-26 18:54:16.526530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61375 ] 00:07:25.394 [2024-11-26 18:54:16.711045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.652 [2024-11-26 18:54:16.850833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.910 [2024-11-26 18:54:17.063359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.910 [2024-11-26 18:54:17.063440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.476 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.476 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.476 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.476 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 BaseBdev1_malloc 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 true 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 [2024-11-26 18:54:17.618034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:26.477 [2024-11-26 18:54:17.618130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.477 [2024-11-26 18:54:17.618159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:26.477 [2024-11-26 18:54:17.618176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.477 [2024-11-26 18:54:17.621146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.477 [2024-11-26 18:54:17.621211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:26.477 BaseBdev1 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
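Each base device in the trace is built as a three-layer stack: a malloc bdev, an error-injection bdev on top of it (which takes the `EE_` prefix), and a passthru bdev exposing the final `BaseBdevN` name. The helper below is hypothetical (not a function from the SPDK repo); it only prints the RPC sequence visible in the xtrace above, to make the layering explicit:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the RPC call sequence the traced test issues
# for one error-injectable base bdev (malloc -> error -> passthru).
make_base_bdev_cmds() {
    local name=$1
    echo "bdev_malloc_create 32 512 -b ${name}_malloc"
    echo "bdev_error_create ${name}_malloc"
    echo "bdev_passthru_create -b EE_${name}_malloc -p ${name}"
}
make_base_bdev_cmds BaseBdev1
```

In the real run these are issued through `rpc_cmd` against the bdevperf application's RPC socket; the `EE_` layer is what later lets `bdev_error_inject_error` fail I/O on demand.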
00:07:26.477 BaseBdev2_malloc 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 true 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 [2024-11-26 18:54:17.676802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:26.477 [2024-11-26 18:54:17.676894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.477 [2024-11-26 18:54:17.676946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:26.477 [2024-11-26 18:54:17.676973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.477 [2024-11-26 18:54:17.679897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.477 [2024-11-26 18:54:17.680023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:26.477 BaseBdev2 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:26.477 18:54:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 [2024-11-26 18:54:17.684885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.477 [2024-11-26 18:54:17.687403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.477 [2024-11-26 18:54:17.687757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.477 [2024-11-26 18:54:17.687789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.477 [2024-11-26 18:54:17.688125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:26.477 [2024-11-26 18:54:17.688405] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.477 [2024-11-26 18:54:17.688427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:26.477 [2024-11-26 18:54:17.688675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.477 "name": "raid_bdev1", 00:07:26.477 "uuid": "b4cb6c14-f204-478a-a7ca-5b1c40aa9c0b", 00:07:26.477 "strip_size_kb": 64, 00:07:26.477 "state": "online", 00:07:26.477 "raid_level": "raid0", 00:07:26.477 "superblock": true, 00:07:26.477 "num_base_bdevs": 2, 00:07:26.477 "num_base_bdevs_discovered": 2, 00:07:26.477 "num_base_bdevs_operational": 2, 00:07:26.477 "base_bdevs_list": [ 00:07:26.477 { 00:07:26.477 "name": "BaseBdev1", 00:07:26.477 "uuid": "e99a65ba-d846-572d-9b59-a90e0059f982", 00:07:26.477 "is_configured": true, 00:07:26.477 "data_offset": 2048, 00:07:26.477 "data_size": 63488 00:07:26.477 }, 00:07:26.477 { 00:07:26.477 "name": "BaseBdev2", 00:07:26.477 "uuid": "0a33390a-13e2-59a6-9e7b-b2a1c23f9ed3", 00:07:26.477 "is_configured": true, 00:07:26.477 "data_offset": 2048, 00:07:26.477 "data_size": 63488 00:07:26.477 } 00:07:26.477 ] 00:07:26.477 }' 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.477 18:54:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.046 18:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:27.046 18:54:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:27.046 [2024-11-26 18:54:18.294529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
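The `verify_raid_bdev_state` check above selects the array from `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'`. As a rough illustration only (not the script's actual code), the same scalar fields can be pulled from the selected object with grep/cut when jq is unavailable; the JSON fragment is copied from the trace:

```shell
#!/usr/bin/env bash
# Crude grep/cut stand-in for the jq field extraction in the trace.
raid_bdev_info='{ "name": "raid_bdev1", "strip_size_kb": 64, "state": "online", "raid_level": "raid0", "num_base_bdevs": 2, "num_base_bdevs_discovered": 2 }'
state=$(printf '%s' "$raid_bdev_info" | grep -o '"state": "[^"]*"' | cut -d'"' -f4)
discovered=$(printf '%s' "$raid_bdev_info" | grep -o '"num_base_bdevs_discovered": [0-9]*' | grep -o '[0-9]*$')
echo "state=$state discovered=$discovered"   # -> state=online discovered=2
```

The test passes when the state is `online` and the discovered count matches `num_base_bdevs_operational` (2 here), which is exactly what the JSON in the trace reports both before and after the read-error injection on `EE_BaseBdev1_malloc` (raid0 has no redundancy, so a read error does not degrade the array's reported state).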
00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.112 "name": "raid_bdev1", 00:07:28.112 "uuid": "b4cb6c14-f204-478a-a7ca-5b1c40aa9c0b", 00:07:28.112 "strip_size_kb": 64, 00:07:28.112 "state": "online", 00:07:28.112 "raid_level": "raid0", 00:07:28.112 "superblock": true, 00:07:28.112 "num_base_bdevs": 2, 00:07:28.112 "num_base_bdevs_discovered": 2, 00:07:28.112 "num_base_bdevs_operational": 2, 00:07:28.112 "base_bdevs_list": [ 00:07:28.112 { 00:07:28.112 "name": "BaseBdev1", 00:07:28.112 "uuid": "e99a65ba-d846-572d-9b59-a90e0059f982", 00:07:28.112 "is_configured": true, 00:07:28.112 "data_offset": 2048, 00:07:28.112 "data_size": 63488 00:07:28.112 }, 00:07:28.112 { 00:07:28.112 "name": "BaseBdev2", 00:07:28.112 "uuid": "0a33390a-13e2-59a6-9e7b-b2a1c23f9ed3", 00:07:28.112 "is_configured": true, 00:07:28.112 "data_offset": 2048, 00:07:28.112 "data_size": 63488 00:07:28.112 } 00:07:28.112 ] 00:07:28.112 }' 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.112 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.372 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:28.372 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.372 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 [2024-11-26 18:54:19.737286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.632 [2024-11-26 18:54:19.737343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.632 { 00:07:28.632 "results": [ 00:07:28.632 { 00:07:28.632 "job": "raid_bdev1", 00:07:28.632 "core_mask": "0x1", 00:07:28.632 "workload": "randrw", 00:07:28.632 "percentage": 50, 00:07:28.632 "status": "finished", 00:07:28.632 "queue_depth": 1, 00:07:28.632 "io_size": 131072, 00:07:28.632 "runtime": 1.440168, 00:07:28.632 "iops": 10493.220235416979, 00:07:28.632 "mibps": 1311.6525294271223, 00:07:28.632 "io_failed": 1, 00:07:28.632 "io_timeout": 0, 00:07:28.632 "avg_latency_us": 133.57095625078952, 00:07:28.632 "min_latency_us": 38.4, 00:07:28.632 "max_latency_us": 1980.9745454545455 00:07:28.632 } 00:07:28.632 ], 00:07:28.632 "core_count": 1 00:07:28.632 } 00:07:28.632 [2024-11-26 18:54:19.741280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.632 [2024-11-26 18:54:19.741400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.632 [2024-11-26 18:54:19.741475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.632 [2024-11-26 18:54:19.741508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61375 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61375 ']' 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61375 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61375 00:07:28.632 killing process with pid 61375 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61375' 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61375 00:07:28.632 [2024-11-26 18:54:19.783771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.632 18:54:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61375 00:07:28.632 [2024-11-26 18:54:19.903498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ngAOIpM3fI 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:07:30.009 00:07:30.009 real 0m4.621s 00:07:30.009 user 0m5.808s 00:07:30.009 sys 0m0.551s 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.009 ************************************ 00:07:30.009 END TEST raid_read_error_test 00:07:30.009 ************************************ 00:07:30.009 18:54:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.009 18:54:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:30.009 18:54:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:30.009 18:54:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.009 18:54:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.009 ************************************ 00:07:30.009 START TEST raid_write_error_test 00:07:30.009 ************************************ 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.009 18:54:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ifwvy7S6iv 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61525 00:07:30.009 18:54:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61525 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61525 ']' 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.009 18:54:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.009 [2024-11-26 18:54:21.193861] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
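`raid_write_error_test` repeats the read-test flow with write-error injection. As an aside on the read-test `"results"` block earlier in the trace: the reported rates are internally consistent, which a quick awk check confirms (all numbers copied verbatim from that block; 131072 is the 128k `-o` I/O size passed to bdevperf):

```shell
#!/usr/bin/env bash
# Sanity-check the read-test results block:
#   MiB/s = IOPS * io_size / 2^20, and
#   fail_per_s = io_failed / runtime (the 0.69 the test greps from the log).
awk 'BEGIN {
    iops = 10493.220235416979; io_size = 131072
    io_failed = 1; runtime = 1.440168
    printf "mibps=%.2f\n", iops * io_size / (1024 * 1024)   # -> 1311.65
    printf "fail_per_s=%.2f\n", io_failed / runtime          # -> 0.69
}'
```

The final `[[ 0.69 != \0\.\0\0 ]]` check in the read test passes precisely because this failure rate is nonzero, i.e. the injected read error actually reached `raid_bdev1`.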
00:07:30.009 [2024-11-26 18:54:21.194069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61525 ] 00:07:30.268 [2024-11-26 18:54:21.382413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.268 [2024-11-26 18:54:21.534941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.572 [2024-11-26 18:54:21.733066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.572 [2024-11-26 18:54:21.733398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.834 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.834 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:30.834 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:30.834 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:30.834 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.834 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 BaseBdev1_malloc 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 true 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 [2024-11-26 18:54:22.231361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:31.095 [2024-11-26 18:54:22.231593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.095 [2024-11-26 18:54:22.231633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:31.095 [2024-11-26 18:54:22.231653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.095 [2024-11-26 18:54:22.234628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.095 [2024-11-26 18:54:22.234867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:31.095 BaseBdev1 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 BaseBdev2_malloc 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:31.095 18:54:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 true 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 [2024-11-26 18:54:22.292469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:31.095 [2024-11-26 18:54:22.292690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.095 [2024-11-26 18:54:22.292726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:31.095 [2024-11-26 18:54:22.292746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.095 [2024-11-26 18:54:22.295612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.095 [2024-11-26 18:54:22.295662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:31.095 BaseBdev2 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 [2024-11-26 18:54:22.300651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:31.095 [2024-11-26 18:54:22.303148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.095 [2024-11-26 18:54:22.303538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.095 [2024-11-26 18:54:22.303570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.095 [2024-11-26 18:54:22.303880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:31.095 [2024-11-26 18:54:22.304290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.095 [2024-11-26 18:54:22.304350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.095 [2024-11-26 18:54:22.304707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.095 "name": "raid_bdev1", 00:07:31.095 "uuid": "089dff6b-e5d7-41a2-8718-406669f40f19", 00:07:31.095 "strip_size_kb": 64, 00:07:31.095 "state": "online", 00:07:31.095 "raid_level": "raid0", 00:07:31.095 "superblock": true, 00:07:31.095 "num_base_bdevs": 2, 00:07:31.095 "num_base_bdevs_discovered": 2, 00:07:31.095 "num_base_bdevs_operational": 2, 00:07:31.095 "base_bdevs_list": [ 00:07:31.095 { 00:07:31.095 "name": "BaseBdev1", 00:07:31.095 "uuid": "a5e82894-7ab0-5a57-a6ea-734aeb2d3770", 00:07:31.095 "is_configured": true, 00:07:31.095 "data_offset": 2048, 00:07:31.095 "data_size": 63488 00:07:31.095 }, 00:07:31.095 { 00:07:31.095 "name": "BaseBdev2", 00:07:31.095 "uuid": "7ea473c2-8988-5489-9a26-c8552a324d35", 00:07:31.095 "is_configured": true, 00:07:31.095 "data_offset": 2048, 00:07:31.095 "data_size": 63488 00:07:31.095 } 00:07:31.095 ] 00:07:31.095 }' 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.095 18:54:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.664 18:54:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:31.664 18:54:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:31.664 [2024-11-26 18:54:22.910221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.601 18:54:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.601 "name": "raid_bdev1", 00:07:32.601 "uuid": "089dff6b-e5d7-41a2-8718-406669f40f19", 00:07:32.601 "strip_size_kb": 64, 00:07:32.601 "state": "online", 00:07:32.601 "raid_level": "raid0", 00:07:32.601 "superblock": true, 00:07:32.601 "num_base_bdevs": 2, 00:07:32.601 "num_base_bdevs_discovered": 2, 00:07:32.601 "num_base_bdevs_operational": 2, 00:07:32.601 "base_bdevs_list": [ 00:07:32.601 { 00:07:32.601 "name": "BaseBdev1", 00:07:32.601 "uuid": "a5e82894-7ab0-5a57-a6ea-734aeb2d3770", 00:07:32.601 "is_configured": true, 00:07:32.601 "data_offset": 2048, 00:07:32.601 "data_size": 63488 00:07:32.601 }, 00:07:32.601 { 00:07:32.601 "name": "BaseBdev2", 00:07:32.601 "uuid": "7ea473c2-8988-5489-9a26-c8552a324d35", 00:07:32.601 "is_configured": true, 00:07:32.601 "data_offset": 2048, 00:07:32.601 "data_size": 63488 00:07:32.601 } 00:07:32.601 ] 00:07:32.601 }' 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.601 18:54:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.166 18:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.166 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.166 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.166 [2024-11-26 18:54:24.336340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.167 [2024-11-26 18:54:24.336555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.167 [2024-11-26 18:54:24.340208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.167 [2024-11-26 18:54:24.340512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.167 [2024-11-26 18:54:24.340727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.167 [2024-11-26 18:54:24.340915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.167 { 00:07:33.167 "results": [ 00:07:33.167 { 00:07:33.167 "job": "raid_bdev1", 00:07:33.167 "core_mask": "0x1", 00:07:33.167 "workload": "randrw", 00:07:33.167 "percentage": 50, 00:07:33.167 "status": "finished", 00:07:33.167 "queue_depth": 1, 00:07:33.167 "io_size": 131072, 00:07:33.167 "runtime": 1.423881, 00:07:33.167 "iops": 10388.508590254381, 00:07:33.167 "mibps": 1298.5635737817977, 00:07:33.167 "io_failed": 1, 00:07:33.167 "io_timeout": 0, 00:07:33.167 "avg_latency_us": 134.30644813578905, 00:07:33.167 "min_latency_us": 40.49454545454545, 00:07:33.167 "max_latency_us": 1861.8181818181818 00:07:33.167 } 00:07:33.167 ], 00:07:33.167 "core_count": 1 00:07:33.167 } 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61525 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61525 ']' 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61525 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61525 00:07:33.167 killing process with pid 61525 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61525' 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61525 00:07:33.167 [2024-11-26 18:54:24.382673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.167 18:54:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61525 00:07:33.167 [2024-11-26 18:54:24.505946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ifwvy7S6iv 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:34.540 00:07:34.540 real 0m4.554s 00:07:34.540 user 0m5.679s 00:07:34.540 sys 0m0.578s 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.540 ************************************ 00:07:34.540 18:54:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.540 END TEST raid_write_error_test 00:07:34.540 ************************************ 00:07:34.540 18:54:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:34.540 18:54:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:34.540 18:54:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.540 18:54:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.540 18:54:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.540 ************************************ 00:07:34.540 START TEST raid_state_function_test 00:07:34.540 ************************************ 00:07:34.540 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:34.540 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:34.540 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:34.540 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:34.540 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61664 00:07:34.541 18:54:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61664' 00:07:34.541 Process raid pid: 61664 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61664 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61664 ']' 00:07:34.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.541 18:54:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.541 [2024-11-26 18:54:25.802519] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:07:34.541 [2024-11-26 18:54:25.802741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.799 [2024-11-26 18:54:25.995453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.057 [2024-11-26 18:54:26.163599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.057 [2024-11-26 18:54:26.391265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.057 [2024-11-26 18:54:26.391306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.626 [2024-11-26 18:54:26.804792] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.626 [2024-11-26 18:54:26.804875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.626 [2024-11-26 18:54:26.804892] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.626 [2024-11-26 18:54:26.804937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.626 18:54:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.626 "name": "Existed_Raid", 00:07:35.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.626 "strip_size_kb": 64, 00:07:35.626 "state": "configuring", 00:07:35.626 
"raid_level": "concat", 00:07:35.626 "superblock": false, 00:07:35.626 "num_base_bdevs": 2, 00:07:35.626 "num_base_bdevs_discovered": 0, 00:07:35.626 "num_base_bdevs_operational": 2, 00:07:35.626 "base_bdevs_list": [ 00:07:35.626 { 00:07:35.626 "name": "BaseBdev1", 00:07:35.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.626 "is_configured": false, 00:07:35.626 "data_offset": 0, 00:07:35.626 "data_size": 0 00:07:35.626 }, 00:07:35.626 { 00:07:35.626 "name": "BaseBdev2", 00:07:35.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.626 "is_configured": false, 00:07:35.626 "data_offset": 0, 00:07:35.626 "data_size": 0 00:07:35.626 } 00:07:35.626 ] 00:07:35.626 }' 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.626 18:54:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.193 [2024-11-26 18:54:27.348858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.193 [2024-11-26 18:54:27.348900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:36.193 [2024-11-26 18:54:27.356879] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.193 [2024-11-26 18:54:27.356960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.193 [2024-11-26 18:54:27.356976] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.193 [2024-11-26 18:54:27.356995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.193 [2024-11-26 18:54:27.402851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.193 BaseBdev1 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.193 [ 00:07:36.193 { 00:07:36.193 "name": "BaseBdev1", 00:07:36.193 "aliases": [ 00:07:36.193 "7063f51a-9220-4343-a066-cb2f7afb26aa" 00:07:36.193 ], 00:07:36.193 "product_name": "Malloc disk", 00:07:36.193 "block_size": 512, 00:07:36.193 "num_blocks": 65536, 00:07:36.193 "uuid": "7063f51a-9220-4343-a066-cb2f7afb26aa", 00:07:36.193 "assigned_rate_limits": { 00:07:36.193 "rw_ios_per_sec": 0, 00:07:36.193 "rw_mbytes_per_sec": 0, 00:07:36.193 "r_mbytes_per_sec": 0, 00:07:36.193 "w_mbytes_per_sec": 0 00:07:36.193 }, 00:07:36.193 "claimed": true, 00:07:36.193 "claim_type": "exclusive_write", 00:07:36.193 "zoned": false, 00:07:36.193 "supported_io_types": { 00:07:36.193 "read": true, 00:07:36.193 "write": true, 00:07:36.193 "unmap": true, 00:07:36.193 "flush": true, 00:07:36.193 "reset": true, 00:07:36.193 "nvme_admin": false, 00:07:36.193 "nvme_io": false, 00:07:36.193 "nvme_io_md": false, 00:07:36.193 "write_zeroes": true, 00:07:36.193 "zcopy": true, 00:07:36.193 "get_zone_info": false, 00:07:36.193 "zone_management": false, 00:07:36.193 "zone_append": false, 00:07:36.193 "compare": false, 00:07:36.193 "compare_and_write": false, 00:07:36.193 "abort": true, 00:07:36.193 "seek_hole": false, 00:07:36.193 "seek_data": false, 00:07:36.193 "copy": true, 00:07:36.193 "nvme_iov_md": 
false 00:07:36.193 }, 00:07:36.193 "memory_domains": [ 00:07:36.193 { 00:07:36.193 "dma_device_id": "system", 00:07:36.193 "dma_device_type": 1 00:07:36.193 }, 00:07:36.193 { 00:07:36.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.193 "dma_device_type": 2 00:07:36.193 } 00:07:36.193 ], 00:07:36.193 "driver_specific": {} 00:07:36.193 } 00:07:36.193 ] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.193 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.194 
18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.194 "name": "Existed_Raid", 00:07:36.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.194 "strip_size_kb": 64, 00:07:36.194 "state": "configuring", 00:07:36.194 "raid_level": "concat", 00:07:36.194 "superblock": false, 00:07:36.194 "num_base_bdevs": 2, 00:07:36.194 "num_base_bdevs_discovered": 1, 00:07:36.194 "num_base_bdevs_operational": 2, 00:07:36.194 "base_bdevs_list": [ 00:07:36.194 { 00:07:36.194 "name": "BaseBdev1", 00:07:36.194 "uuid": "7063f51a-9220-4343-a066-cb2f7afb26aa", 00:07:36.194 "is_configured": true, 00:07:36.194 "data_offset": 0, 00:07:36.194 "data_size": 65536 00:07:36.194 }, 00:07:36.194 { 00:07:36.194 "name": "BaseBdev2", 00:07:36.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.194 "is_configured": false, 00:07:36.194 "data_offset": 0, 00:07:36.194 "data_size": 0 00:07:36.194 } 00:07:36.194 ] 00:07:36.194 }' 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.194 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 [2024-11-26 18:54:27.955143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.761 [2024-11-26 18:54:27.955212] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 [2024-11-26 18:54:27.963177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.761 [2024-11-26 18:54:27.965866] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.761 [2024-11-26 18:54:27.965958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 18:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.761 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.761 "name": "Existed_Raid", 00:07:36.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.761 "strip_size_kb": 64, 00:07:36.761 "state": "configuring", 00:07:36.761 "raid_level": "concat", 00:07:36.761 "superblock": false, 00:07:36.761 "num_base_bdevs": 2, 00:07:36.761 "num_base_bdevs_discovered": 1, 00:07:36.761 "num_base_bdevs_operational": 2, 00:07:36.761 "base_bdevs_list": [ 00:07:36.761 { 00:07:36.761 "name": "BaseBdev1", 00:07:36.761 "uuid": "7063f51a-9220-4343-a066-cb2f7afb26aa", 00:07:36.761 "is_configured": true, 00:07:36.761 "data_offset": 0, 00:07:36.761 "data_size": 65536 00:07:36.761 }, 00:07:36.761 { 00:07:36.761 "name": "BaseBdev2", 00:07:36.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.761 "is_configured": false, 00:07:36.761 "data_offset": 0, 00:07:36.761 "data_size": 0 00:07:36.761 } 
00:07:36.761 ] 00:07:36.761 }' 00:07:36.761 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.761 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.328 [2024-11-26 18:54:28.523550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.328 [2024-11-26 18:54:28.523615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.328 [2024-11-26 18:54:28.523639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.328 [2024-11-26 18:54:28.523998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.328 [2024-11-26 18:54:28.524229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.328 [2024-11-26 18:54:28.524262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.328 [2024-11-26 18:54:28.524576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.328 BaseBdev2 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.328 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.328 18:54:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.329 [ 00:07:37.329 { 00:07:37.329 "name": "BaseBdev2", 00:07:37.329 "aliases": [ 00:07:37.329 "a8050b58-7d99-427a-bb68-cbe76b71029b" 00:07:37.329 ], 00:07:37.329 "product_name": "Malloc disk", 00:07:37.329 "block_size": 512, 00:07:37.329 "num_blocks": 65536, 00:07:37.329 "uuid": "a8050b58-7d99-427a-bb68-cbe76b71029b", 00:07:37.329 "assigned_rate_limits": { 00:07:37.329 "rw_ios_per_sec": 0, 00:07:37.329 "rw_mbytes_per_sec": 0, 00:07:37.329 "r_mbytes_per_sec": 0, 00:07:37.329 "w_mbytes_per_sec": 0 00:07:37.329 }, 00:07:37.329 "claimed": true, 00:07:37.329 "claim_type": "exclusive_write", 00:07:37.329 "zoned": false, 00:07:37.329 "supported_io_types": { 00:07:37.329 "read": true, 00:07:37.329 "write": true, 00:07:37.329 "unmap": true, 00:07:37.329 "flush": true, 00:07:37.329 "reset": true, 00:07:37.329 "nvme_admin": false, 00:07:37.329 "nvme_io": false, 00:07:37.329 "nvme_io_md": 
false, 00:07:37.329 "write_zeroes": true, 00:07:37.329 "zcopy": true, 00:07:37.329 "get_zone_info": false, 00:07:37.329 "zone_management": false, 00:07:37.329 "zone_append": false, 00:07:37.329 "compare": false, 00:07:37.329 "compare_and_write": false, 00:07:37.329 "abort": true, 00:07:37.329 "seek_hole": false, 00:07:37.329 "seek_data": false, 00:07:37.329 "copy": true, 00:07:37.329 "nvme_iov_md": false 00:07:37.329 }, 00:07:37.329 "memory_domains": [ 00:07:37.329 { 00:07:37.329 "dma_device_id": "system", 00:07:37.329 "dma_device_type": 1 00:07:37.329 }, 00:07:37.329 { 00:07:37.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.329 "dma_device_type": 2 00:07:37.329 } 00:07:37.329 ], 00:07:37.329 "driver_specific": {} 00:07:37.329 } 00:07:37.329 ] 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.329 "name": "Existed_Raid", 00:07:37.329 "uuid": "5a5d3b46-799f-42d3-b150-6f8c53d9caeb", 00:07:37.329 "strip_size_kb": 64, 00:07:37.329 "state": "online", 00:07:37.329 "raid_level": "concat", 00:07:37.329 "superblock": false, 00:07:37.329 "num_base_bdevs": 2, 00:07:37.329 "num_base_bdevs_discovered": 2, 00:07:37.329 "num_base_bdevs_operational": 2, 00:07:37.329 "base_bdevs_list": [ 00:07:37.329 { 00:07:37.329 "name": "BaseBdev1", 00:07:37.329 "uuid": "7063f51a-9220-4343-a066-cb2f7afb26aa", 00:07:37.329 "is_configured": true, 00:07:37.329 "data_offset": 0, 00:07:37.329 "data_size": 65536 00:07:37.329 }, 00:07:37.329 { 00:07:37.329 "name": "BaseBdev2", 00:07:37.329 "uuid": "a8050b58-7d99-427a-bb68-cbe76b71029b", 00:07:37.329 "is_configured": true, 00:07:37.329 "data_offset": 0, 00:07:37.329 "data_size": 65536 00:07:37.329 } 00:07:37.329 ] 00:07:37.329 }' 00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:37.329 18:54:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.896 [2024-11-26 18:54:29.112280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.896 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.896 "name": "Existed_Raid", 00:07:37.896 "aliases": [ 00:07:37.896 "5a5d3b46-799f-42d3-b150-6f8c53d9caeb" 00:07:37.896 ], 00:07:37.896 "product_name": "Raid Volume", 00:07:37.896 "block_size": 512, 00:07:37.896 "num_blocks": 131072, 00:07:37.896 "uuid": "5a5d3b46-799f-42d3-b150-6f8c53d9caeb", 00:07:37.896 "assigned_rate_limits": { 00:07:37.896 "rw_ios_per_sec": 0, 00:07:37.896 "rw_mbytes_per_sec": 0, 00:07:37.896 "r_mbytes_per_sec": 
0, 00:07:37.896 "w_mbytes_per_sec": 0 00:07:37.896 }, 00:07:37.896 "claimed": false, 00:07:37.897 "zoned": false, 00:07:37.897 "supported_io_types": { 00:07:37.897 "read": true, 00:07:37.897 "write": true, 00:07:37.897 "unmap": true, 00:07:37.897 "flush": true, 00:07:37.897 "reset": true, 00:07:37.897 "nvme_admin": false, 00:07:37.897 "nvme_io": false, 00:07:37.897 "nvme_io_md": false, 00:07:37.897 "write_zeroes": true, 00:07:37.897 "zcopy": false, 00:07:37.897 "get_zone_info": false, 00:07:37.897 "zone_management": false, 00:07:37.897 "zone_append": false, 00:07:37.897 "compare": false, 00:07:37.897 "compare_and_write": false, 00:07:37.897 "abort": false, 00:07:37.897 "seek_hole": false, 00:07:37.897 "seek_data": false, 00:07:37.897 "copy": false, 00:07:37.897 "nvme_iov_md": false 00:07:37.897 }, 00:07:37.897 "memory_domains": [ 00:07:37.897 { 00:07:37.897 "dma_device_id": "system", 00:07:37.897 "dma_device_type": 1 00:07:37.897 }, 00:07:37.897 { 00:07:37.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.897 "dma_device_type": 2 00:07:37.897 }, 00:07:37.897 { 00:07:37.897 "dma_device_id": "system", 00:07:37.897 "dma_device_type": 1 00:07:37.897 }, 00:07:37.897 { 00:07:37.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.897 "dma_device_type": 2 00:07:37.897 } 00:07:37.897 ], 00:07:37.897 "driver_specific": { 00:07:37.897 "raid": { 00:07:37.897 "uuid": "5a5d3b46-799f-42d3-b150-6f8c53d9caeb", 00:07:37.897 "strip_size_kb": 64, 00:07:37.897 "state": "online", 00:07:37.897 "raid_level": "concat", 00:07:37.897 "superblock": false, 00:07:37.897 "num_base_bdevs": 2, 00:07:37.897 "num_base_bdevs_discovered": 2, 00:07:37.897 "num_base_bdevs_operational": 2, 00:07:37.897 "base_bdevs_list": [ 00:07:37.897 { 00:07:37.897 "name": "BaseBdev1", 00:07:37.897 "uuid": "7063f51a-9220-4343-a066-cb2f7afb26aa", 00:07:37.897 "is_configured": true, 00:07:37.897 "data_offset": 0, 00:07:37.897 "data_size": 65536 00:07:37.897 }, 00:07:37.897 { 00:07:37.897 "name": "BaseBdev2", 
00:07:37.897 "uuid": "a8050b58-7d99-427a-bb68-cbe76b71029b", 00:07:37.897 "is_configured": true, 00:07:37.897 "data_offset": 0, 00:07:37.897 "data_size": 65536 00:07:37.897 } 00:07:37.897 ] 00:07:37.897 } 00:07:37.897 } 00:07:37.897 }' 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:37.897 BaseBdev2' 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.897 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.155 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.156 [2024-11-26 18:54:29.348048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.156 [2024-11-26 18:54:29.348097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.156 [2024-11-26 18:54:29.348172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.156 "name": "Existed_Raid", 00:07:38.156 "uuid": "5a5d3b46-799f-42d3-b150-6f8c53d9caeb", 00:07:38.156 "strip_size_kb": 64, 00:07:38.156 
"state": "offline", 00:07:38.156 "raid_level": "concat", 00:07:38.156 "superblock": false, 00:07:38.156 "num_base_bdevs": 2, 00:07:38.156 "num_base_bdevs_discovered": 1, 00:07:38.156 "num_base_bdevs_operational": 1, 00:07:38.156 "base_bdevs_list": [ 00:07:38.156 { 00:07:38.156 "name": null, 00:07:38.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.156 "is_configured": false, 00:07:38.156 "data_offset": 0, 00:07:38.156 "data_size": 65536 00:07:38.156 }, 00:07:38.156 { 00:07:38.156 "name": "BaseBdev2", 00:07:38.156 "uuid": "a8050b58-7d99-427a-bb68-cbe76b71029b", 00:07:38.156 "is_configured": true, 00:07:38.156 "data_offset": 0, 00:07:38.156 "data_size": 65536 00:07:38.156 } 00:07:38.156 ] 00:07:38.156 }' 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.156 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.721 18:54:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.721 [2024-11-26 18:54:29.990569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.721 [2024-11-26 18:54:29.990642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.721 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61664 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61664 ']' 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61664 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61664 00:07:38.979 killing process with pid 61664 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61664' 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61664 00:07:38.979 [2024-11-26 18:54:30.164673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.979 18:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61664 00:07:38.979 [2024-11-26 18:54:30.180573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.935 ************************************ 00:07:39.935 END TEST raid_state_function_test 00:07:39.935 ************************************ 00:07:39.935 18:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.935 00:07:39.935 real 0m5.567s 00:07:39.935 user 0m8.391s 00:07:39.935 sys 0m0.807s 00:07:39.935 18:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.935 18:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.198 18:54:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:40.198 18:54:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:40.198 18:54:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.198 18:54:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.198 ************************************ 00:07:40.198 START TEST raid_state_function_test_sb 00:07:40.198 ************************************ 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61923 00:07:40.198 Process raid pid: 61923 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61923' 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61923 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61923 ']' 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.198 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.198 18:54:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.198 [2024-11-26 18:54:31.427271] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:40.198 [2024-11-26 18:54:31.427479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.469 [2024-11-26 18:54:31.615356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.469 [2024-11-26 18:54:31.753558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.728 [2024-11-26 18:54:31.961302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.728 [2024-11-26 18:54:31.961363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.293 [2024-11-26 18:54:32.438339] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:41.293 [2024-11-26 18:54:32.438434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.293 [2024-11-26 18:54:32.438451] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.293 [2024-11-26 18:54:32.438466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.293 
18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.293 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.293 "name": "Existed_Raid", 00:07:41.293 "uuid": "03f79f23-9e6f-4987-9823-baa10f807b4c", 00:07:41.293 "strip_size_kb": 64, 00:07:41.293 "state": "configuring", 00:07:41.293 "raid_level": "concat", 00:07:41.293 "superblock": true, 00:07:41.293 "num_base_bdevs": 2, 00:07:41.293 "num_base_bdevs_discovered": 0, 00:07:41.293 "num_base_bdevs_operational": 2, 00:07:41.293 "base_bdevs_list": [ 00:07:41.293 { 00:07:41.293 "name": "BaseBdev1", 00:07:41.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.293 "is_configured": false, 00:07:41.293 "data_offset": 0, 00:07:41.293 "data_size": 0 00:07:41.293 }, 00:07:41.293 { 00:07:41.294 "name": "BaseBdev2", 00:07:41.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.294 "is_configured": false, 00:07:41.294 "data_offset": 0, 00:07:41.294 "data_size": 0 00:07:41.294 } 00:07:41.294 ] 00:07:41.294 }' 00:07:41.294 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.294 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.859 [2024-11-26 18:54:32.934408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:41.859 [2024-11-26 18:54:32.934471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.859 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.860 [2024-11-26 18:54:32.946454] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.860 [2024-11-26 18:54:32.946547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.860 [2024-11-26 18:54:32.946562] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.860 [2024-11-26 18:54:32.946581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.860 [2024-11-26 18:54:32.993989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.860 BaseBdev1 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.860 18:54:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.860 [ 00:07:41.860 { 00:07:41.860 "name": "BaseBdev1", 00:07:41.860 "aliases": [ 00:07:41.860 "c890187a-cac0-4946-80b0-a5690b9bb3b9" 00:07:41.860 ], 00:07:41.860 "product_name": "Malloc disk", 00:07:41.860 "block_size": 512, 00:07:41.860 "num_blocks": 65536, 00:07:41.860 "uuid": "c890187a-cac0-4946-80b0-a5690b9bb3b9", 00:07:41.860 "assigned_rate_limits": { 00:07:41.860 "rw_ios_per_sec": 0, 00:07:41.860 "rw_mbytes_per_sec": 0, 00:07:41.860 "r_mbytes_per_sec": 0, 00:07:41.860 "w_mbytes_per_sec": 0 00:07:41.860 }, 00:07:41.860 "claimed": true, 
00:07:41.860 "claim_type": "exclusive_write", 00:07:41.860 "zoned": false, 00:07:41.860 "supported_io_types": { 00:07:41.860 "read": true, 00:07:41.860 "write": true, 00:07:41.860 "unmap": true, 00:07:41.860 "flush": true, 00:07:41.860 "reset": true, 00:07:41.860 "nvme_admin": false, 00:07:41.860 "nvme_io": false, 00:07:41.860 "nvme_io_md": false, 00:07:41.860 "write_zeroes": true, 00:07:41.860 "zcopy": true, 00:07:41.860 "get_zone_info": false, 00:07:41.860 "zone_management": false, 00:07:41.860 "zone_append": false, 00:07:41.860 "compare": false, 00:07:41.860 "compare_and_write": false, 00:07:41.860 "abort": true, 00:07:41.860 "seek_hole": false, 00:07:41.860 "seek_data": false, 00:07:41.860 "copy": true, 00:07:41.860 "nvme_iov_md": false 00:07:41.860 }, 00:07:41.860 "memory_domains": [ 00:07:41.860 { 00:07:41.860 "dma_device_id": "system", 00:07:41.860 "dma_device_type": 1 00:07:41.860 }, 00:07:41.860 { 00:07:41.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.860 "dma_device_type": 2 00:07:41.860 } 00:07:41.860 ], 00:07:41.860 "driver_specific": {} 00:07:41.860 } 00:07:41.860 ] 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.860 18:54:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.860 "name": "Existed_Raid", 00:07:41.860 "uuid": "b3759140-fc2e-4790-9eda-b6c4d921bbbc", 00:07:41.860 "strip_size_kb": 64, 00:07:41.860 "state": "configuring", 00:07:41.860 "raid_level": "concat", 00:07:41.860 "superblock": true, 00:07:41.860 "num_base_bdevs": 2, 00:07:41.860 "num_base_bdevs_discovered": 1, 00:07:41.860 "num_base_bdevs_operational": 2, 00:07:41.860 "base_bdevs_list": [ 00:07:41.860 { 00:07:41.860 "name": "BaseBdev1", 00:07:41.860 "uuid": "c890187a-cac0-4946-80b0-a5690b9bb3b9", 00:07:41.860 "is_configured": true, 00:07:41.860 "data_offset": 2048, 00:07:41.860 "data_size": 63488 00:07:41.860 }, 00:07:41.860 { 00:07:41.860 "name": "BaseBdev2", 00:07:41.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.860 
"is_configured": false, 00:07:41.860 "data_offset": 0, 00:07:41.860 "data_size": 0 00:07:41.860 } 00:07:41.860 ] 00:07:41.860 }' 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.860 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 [2024-11-26 18:54:33.538245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.426 [2024-11-26 18:54:33.538343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 [2024-11-26 18:54:33.546306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.426 [2024-11-26 18:54:33.548973] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.426 [2024-11-26 18:54:33.549027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.426 18:54:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.426 18:54:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.426 "name": "Existed_Raid", 00:07:42.426 "uuid": "624763ff-a434-4549-afb2-b4835e8795fd", 00:07:42.426 "strip_size_kb": 64, 00:07:42.426 "state": "configuring", 00:07:42.426 "raid_level": "concat", 00:07:42.426 "superblock": true, 00:07:42.426 "num_base_bdevs": 2, 00:07:42.426 "num_base_bdevs_discovered": 1, 00:07:42.426 "num_base_bdevs_operational": 2, 00:07:42.426 "base_bdevs_list": [ 00:07:42.426 { 00:07:42.426 "name": "BaseBdev1", 00:07:42.426 "uuid": "c890187a-cac0-4946-80b0-a5690b9bb3b9", 00:07:42.426 "is_configured": true, 00:07:42.426 "data_offset": 2048, 00:07:42.426 "data_size": 63488 00:07:42.426 }, 00:07:42.426 { 00:07:42.426 "name": "BaseBdev2", 00:07:42.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.426 "is_configured": false, 00:07:42.426 "data_offset": 0, 00:07:42.426 "data_size": 0 00:07:42.426 } 00:07:42.426 ] 00:07:42.426 }' 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.426 18:54:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.992 [2024-11-26 18:54:34.098051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.992 [2024-11-26 18:54:34.098368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.992 [2024-11-26 18:54:34.098388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.992 [2024-11-26 18:54:34.098712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:42.992 BaseBdev2 00:07:42.992 [2024-11-26 18:54:34.098928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.992 [2024-11-26 18:54:34.098954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.992 [2024-11-26 18:54:34.099137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.992 18:54:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.992 [ 00:07:42.992 { 00:07:42.992 "name": "BaseBdev2", 00:07:42.992 "aliases": [ 00:07:42.992 "0a3f7335-ff5d-4906-916f-2308824738d8" 00:07:42.992 ], 00:07:42.992 "product_name": "Malloc disk", 00:07:42.992 "block_size": 512, 00:07:42.992 "num_blocks": 65536, 00:07:42.992 "uuid": "0a3f7335-ff5d-4906-916f-2308824738d8", 00:07:42.992 "assigned_rate_limits": { 00:07:42.992 "rw_ios_per_sec": 0, 00:07:42.992 "rw_mbytes_per_sec": 0, 00:07:42.992 "r_mbytes_per_sec": 0, 00:07:42.992 "w_mbytes_per_sec": 0 00:07:42.992 }, 00:07:42.992 "claimed": true, 00:07:42.992 "claim_type": "exclusive_write", 00:07:42.992 "zoned": false, 00:07:42.992 "supported_io_types": { 00:07:42.992 "read": true, 00:07:42.992 "write": true, 00:07:42.992 "unmap": true, 00:07:42.992 "flush": true, 00:07:42.992 "reset": true, 00:07:42.992 "nvme_admin": false, 00:07:42.992 "nvme_io": false, 00:07:42.992 "nvme_io_md": false, 00:07:42.992 "write_zeroes": true, 00:07:42.992 "zcopy": true, 00:07:42.992 "get_zone_info": false, 00:07:42.992 "zone_management": false, 00:07:42.992 "zone_append": false, 00:07:42.992 "compare": false, 00:07:42.992 "compare_and_write": false, 00:07:42.992 "abort": true, 00:07:42.992 "seek_hole": false, 00:07:42.992 "seek_data": false, 00:07:42.992 "copy": true, 00:07:42.992 "nvme_iov_md": false 00:07:42.992 }, 00:07:42.992 "memory_domains": [ 00:07:42.992 { 00:07:42.992 "dma_device_id": "system", 00:07:42.992 "dma_device_type": 1 00:07:42.992 }, 00:07:42.992 { 00:07:42.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.992 "dma_device_type": 2 00:07:42.992 } 00:07:42.992 ], 00:07:42.992 "driver_specific": {} 00:07:42.992 } 00:07:42.992 ] 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.992 18:54:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.992 18:54:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.992 "name": "Existed_Raid", 00:07:42.992 "uuid": "624763ff-a434-4549-afb2-b4835e8795fd", 00:07:42.992 "strip_size_kb": 64, 00:07:42.992 "state": "online", 00:07:42.992 "raid_level": "concat", 00:07:42.992 "superblock": true, 00:07:42.992 "num_base_bdevs": 2, 00:07:42.992 "num_base_bdevs_discovered": 2, 00:07:42.992 "num_base_bdevs_operational": 2, 00:07:42.992 "base_bdevs_list": [ 00:07:42.992 { 00:07:42.992 "name": "BaseBdev1", 00:07:42.992 "uuid": "c890187a-cac0-4946-80b0-a5690b9bb3b9", 00:07:42.992 "is_configured": true, 00:07:42.992 "data_offset": 2048, 00:07:42.992 "data_size": 63488 00:07:42.992 }, 00:07:42.992 { 00:07:42.992 "name": "BaseBdev2", 00:07:42.992 "uuid": "0a3f7335-ff5d-4906-916f-2308824738d8", 00:07:42.992 "is_configured": true, 00:07:42.992 "data_offset": 2048, 00:07:42.992 "data_size": 63488 00:07:42.992 } 00:07:42.992 ] 00:07:42.992 }' 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.992 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.557 [2024-11-26 18:54:34.654616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.557 "name": "Existed_Raid", 00:07:43.557 "aliases": [ 00:07:43.557 "624763ff-a434-4549-afb2-b4835e8795fd" 00:07:43.557 ], 00:07:43.557 "product_name": "Raid Volume", 00:07:43.557 "block_size": 512, 00:07:43.557 "num_blocks": 126976, 00:07:43.557 "uuid": "624763ff-a434-4549-afb2-b4835e8795fd", 00:07:43.557 "assigned_rate_limits": { 00:07:43.557 "rw_ios_per_sec": 0, 00:07:43.557 "rw_mbytes_per_sec": 0, 00:07:43.557 "r_mbytes_per_sec": 0, 00:07:43.557 "w_mbytes_per_sec": 0 00:07:43.557 }, 00:07:43.557 "claimed": false, 00:07:43.557 "zoned": false, 00:07:43.557 "supported_io_types": { 00:07:43.557 "read": true, 00:07:43.557 "write": true, 00:07:43.557 "unmap": true, 00:07:43.557 "flush": true, 00:07:43.557 "reset": true, 00:07:43.557 "nvme_admin": false, 00:07:43.557 "nvme_io": false, 00:07:43.557 "nvme_io_md": false, 00:07:43.557 "write_zeroes": true, 00:07:43.557 "zcopy": false, 00:07:43.557 "get_zone_info": false, 00:07:43.557 "zone_management": false, 00:07:43.557 "zone_append": false, 00:07:43.557 "compare": false, 00:07:43.557 "compare_and_write": false, 00:07:43.557 "abort": false, 00:07:43.557 "seek_hole": false, 00:07:43.557 "seek_data": false, 00:07:43.557 "copy": false, 00:07:43.557 "nvme_iov_md": false 00:07:43.557 }, 00:07:43.557 "memory_domains": [ 00:07:43.557 { 00:07:43.557 
"dma_device_id": "system", 00:07:43.557 "dma_device_type": 1 00:07:43.557 }, 00:07:43.557 { 00:07:43.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.557 "dma_device_type": 2 00:07:43.557 }, 00:07:43.557 { 00:07:43.557 "dma_device_id": "system", 00:07:43.557 "dma_device_type": 1 00:07:43.557 }, 00:07:43.557 { 00:07:43.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.557 "dma_device_type": 2 00:07:43.557 } 00:07:43.557 ], 00:07:43.557 "driver_specific": { 00:07:43.557 "raid": { 00:07:43.557 "uuid": "624763ff-a434-4549-afb2-b4835e8795fd", 00:07:43.557 "strip_size_kb": 64, 00:07:43.557 "state": "online", 00:07:43.557 "raid_level": "concat", 00:07:43.557 "superblock": true, 00:07:43.557 "num_base_bdevs": 2, 00:07:43.557 "num_base_bdevs_discovered": 2, 00:07:43.557 "num_base_bdevs_operational": 2, 00:07:43.557 "base_bdevs_list": [ 00:07:43.557 { 00:07:43.557 "name": "BaseBdev1", 00:07:43.557 "uuid": "c890187a-cac0-4946-80b0-a5690b9bb3b9", 00:07:43.557 "is_configured": true, 00:07:43.557 "data_offset": 2048, 00:07:43.557 "data_size": 63488 00:07:43.557 }, 00:07:43.557 { 00:07:43.557 "name": "BaseBdev2", 00:07:43.557 "uuid": "0a3f7335-ff5d-4906-916f-2308824738d8", 00:07:43.557 "is_configured": true, 00:07:43.557 "data_offset": 2048, 00:07:43.557 "data_size": 63488 00:07:43.557 } 00:07:43.557 ] 00:07:43.557 } 00:07:43.557 } 00:07:43.557 }' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.557 BaseBdev2' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.557 18:54:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.557 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.815 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.815 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.815 18:54:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.815 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.815 18:54:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.815 [2024-11-26 18:54:34.930391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.815 [2024-11-26 18:54:34.930442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.815 [2024-11-26 18:54:34.930524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:43.815 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.816 "name": "Existed_Raid", 00:07:43.816 "uuid": "624763ff-a434-4549-afb2-b4835e8795fd", 00:07:43.816 "strip_size_kb": 64, 00:07:43.816 "state": "offline", 00:07:43.816 "raid_level": "concat", 00:07:43.816 "superblock": true, 00:07:43.816 "num_base_bdevs": 2, 00:07:43.816 "num_base_bdevs_discovered": 1, 00:07:43.816 "num_base_bdevs_operational": 1, 00:07:43.816 "base_bdevs_list": [ 00:07:43.816 { 00:07:43.816 "name": null, 00:07:43.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.816 "is_configured": false, 00:07:43.816 "data_offset": 0, 00:07:43.816 "data_size": 63488 00:07:43.816 }, 00:07:43.816 { 00:07:43.816 "name": "BaseBdev2", 00:07:43.816 "uuid": "0a3f7335-ff5d-4906-916f-2308824738d8", 00:07:43.816 "is_configured": true, 00:07:43.816 "data_offset": 2048, 00:07:43.816 "data_size": 63488 00:07:43.816 } 00:07:43.816 ] 
00:07:43.816 }' 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.816 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.388 [2024-11-26 18:54:35.587242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.388 [2024-11-26 18:54:35.587322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.388 18:54:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61923 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61923 ']' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61923 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.388 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61923 00:07:44.646 killing process with pid 61923 00:07:44.646 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.646 18:54:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.646 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61923' 00:07:44.646 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61923 00:07:44.646 [2024-11-26 18:54:35.768931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.646 18:54:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61923 00:07:44.646 [2024-11-26 18:54:35.784323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.581 18:54:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.581 00:07:45.581 real 0m5.567s 00:07:45.581 user 0m8.426s 00:07:45.581 sys 0m0.755s 00:07:45.581 18:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.581 ************************************ 00:07:45.581 END TEST raid_state_function_test_sb 00:07:45.581 ************************************ 00:07:45.581 18:54:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 18:54:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:45.581 18:54:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:45.581 18:54:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.581 18:54:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 ************************************ 00:07:45.581 START TEST raid_superblock_test 00:07:45.581 ************************************ 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62175 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62175 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62175 ']' 00:07:45.581 
18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.581 18:54:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.840 [2024-11-26 18:54:37.044212] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:45.840 [2024-11-26 18:54:37.044417] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62175 ] 00:07:46.098 [2024-11-26 18:54:37.235854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.098 [2024-11-26 18:54:37.400448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.357 [2024-11-26 18:54:37.624187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.357 [2024-11-26 18:54:37.624257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.924 malloc1 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.924 [2024-11-26 18:54:38.067763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.924 [2024-11-26 18:54:38.067862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.924 [2024-11-26 18:54:38.067895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:46.924 [2024-11-26 18:54:38.067937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:46.924 [2024-11-26 18:54:38.070970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.924 [2024-11-26 18:54:38.071027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.924 pt1 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.924 malloc2 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.924 [2024-11-26 18:54:38.121024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.924 [2024-11-26 18:54:38.121091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.924 [2024-11-26 18:54:38.121128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:46.924 [2024-11-26 18:54:38.121143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.924 [2024-11-26 18:54:38.124021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.924 [2024-11-26 18:54:38.124061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.924 pt2 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.924 [2024-11-26 18:54:38.129111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.924 [2024-11-26 18:54:38.131697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.924 [2024-11-26 18:54:38.131987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.924 [2024-11-26 18:54:38.132013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:46.924 [2024-11-26 18:54:38.132329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.924 [2024-11-26 18:54:38.132528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.924 [2024-11-26 18:54:38.132554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:46.924 [2024-11-26 18:54:38.132738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.924 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.925 18:54:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.925 "name": "raid_bdev1", 00:07:46.925 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:46.925 "strip_size_kb": 64, 00:07:46.925 "state": "online", 00:07:46.925 "raid_level": "concat", 00:07:46.925 "superblock": true, 00:07:46.925 "num_base_bdevs": 2, 00:07:46.925 "num_base_bdevs_discovered": 2, 00:07:46.925 "num_base_bdevs_operational": 2, 00:07:46.925 "base_bdevs_list": [ 00:07:46.925 { 00:07:46.925 "name": "pt1", 00:07:46.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.925 "is_configured": true, 00:07:46.925 "data_offset": 2048, 00:07:46.925 "data_size": 63488 00:07:46.925 }, 00:07:46.925 { 00:07:46.925 "name": "pt2", 00:07:46.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.925 "is_configured": true, 00:07:46.925 "data_offset": 2048, 00:07:46.925 "data_size": 63488 00:07:46.925 } 00:07:46.925 ] 00:07:46.925 }' 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.925 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.490 [2024-11-26 18:54:38.665668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.490 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.490 "name": "raid_bdev1", 00:07:47.490 "aliases": [ 00:07:47.490 "4118e56f-09ea-42e9-a272-1f19b53e03b1" 00:07:47.490 ], 00:07:47.490 "product_name": "Raid Volume", 00:07:47.490 "block_size": 512, 00:07:47.490 "num_blocks": 126976, 00:07:47.490 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:47.490 "assigned_rate_limits": { 00:07:47.490 "rw_ios_per_sec": 0, 00:07:47.490 "rw_mbytes_per_sec": 0, 00:07:47.490 "r_mbytes_per_sec": 0, 00:07:47.490 "w_mbytes_per_sec": 0 00:07:47.490 }, 00:07:47.490 "claimed": false, 00:07:47.490 "zoned": false, 00:07:47.490 "supported_io_types": { 00:07:47.490 "read": true, 00:07:47.490 "write": true, 00:07:47.490 "unmap": true, 00:07:47.490 "flush": true, 00:07:47.490 "reset": true, 00:07:47.490 "nvme_admin": false, 00:07:47.490 "nvme_io": false, 00:07:47.490 "nvme_io_md": false, 00:07:47.490 "write_zeroes": true, 00:07:47.490 "zcopy": false, 00:07:47.490 "get_zone_info": false, 00:07:47.490 "zone_management": false, 00:07:47.490 "zone_append": false, 00:07:47.490 "compare": false, 00:07:47.490 "compare_and_write": false, 00:07:47.490 "abort": false, 00:07:47.490 
"seek_hole": false, 00:07:47.490 "seek_data": false, 00:07:47.490 "copy": false, 00:07:47.490 "nvme_iov_md": false 00:07:47.490 }, 00:07:47.490 "memory_domains": [ 00:07:47.490 { 00:07:47.490 "dma_device_id": "system", 00:07:47.490 "dma_device_type": 1 00:07:47.490 }, 00:07:47.490 { 00:07:47.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.491 "dma_device_type": 2 00:07:47.491 }, 00:07:47.491 { 00:07:47.491 "dma_device_id": "system", 00:07:47.491 "dma_device_type": 1 00:07:47.491 }, 00:07:47.491 { 00:07:47.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.491 "dma_device_type": 2 00:07:47.491 } 00:07:47.491 ], 00:07:47.491 "driver_specific": { 00:07:47.491 "raid": { 00:07:47.491 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:47.491 "strip_size_kb": 64, 00:07:47.491 "state": "online", 00:07:47.491 "raid_level": "concat", 00:07:47.491 "superblock": true, 00:07:47.491 "num_base_bdevs": 2, 00:07:47.491 "num_base_bdevs_discovered": 2, 00:07:47.491 "num_base_bdevs_operational": 2, 00:07:47.491 "base_bdevs_list": [ 00:07:47.491 { 00:07:47.491 "name": "pt1", 00:07:47.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.491 "is_configured": true, 00:07:47.491 "data_offset": 2048, 00:07:47.491 "data_size": 63488 00:07:47.491 }, 00:07:47.491 { 00:07:47.491 "name": "pt2", 00:07:47.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.491 "is_configured": true, 00:07:47.491 "data_offset": 2048, 00:07:47.491 "data_size": 63488 00:07:47.491 } 00:07:47.491 ] 00:07:47.491 } 00:07:47.491 } 00:07:47.491 }' 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:47.491 pt2' 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.491 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.747 [2024-11-26 18:54:38.929773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4118e56f-09ea-42e9-a272-1f19b53e03b1 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4118e56f-09ea-42e9-a272-1f19b53e03b1 ']' 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.747 [2024-11-26 18:54:38.977328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.747 [2024-11-26 18:54:38.977402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.747 [2024-11-26 18:54:38.977518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.747 [2024-11-26 18:54:38.977614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.747 [2024-11-26 18:54:38.977640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.747 18:54:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.747 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:47.747 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:47.747 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:47.747 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.748 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.007 [2024-11-26 18:54:39.117467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:48.007 [2024-11-26 18:54:39.120172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:48.007 [2024-11-26 18:54:39.120265] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:48.007 [2024-11-26 18:54:39.120382] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:48.007 [2024-11-26 18:54:39.120407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.007 [2024-11-26 18:54:39.120439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:48.007 request: 00:07:48.007 { 00:07:48.007 "name": "raid_bdev1", 00:07:48.007 "raid_level": "concat", 00:07:48.007 "base_bdevs": [ 00:07:48.007 "malloc1", 00:07:48.007 "malloc2" 00:07:48.007 ], 00:07:48.007 "strip_size_kb": 64, 00:07:48.007 "superblock": false, 00:07:48.007 "method": "bdev_raid_create", 00:07:48.007 "req_id": 1 00:07:48.007 } 00:07:48.007 Got JSON-RPC error response 00:07:48.007 response: 00:07:48.007 { 00:07:48.007 "code": -17, 00:07:48.007 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:48.007 } 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.007 [2024-11-26 18:54:39.181418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.007 [2024-11-26 18:54:39.181501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.007 [2024-11-26 18:54:39.181530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:48.007 [2024-11-26 18:54:39.181548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.007 [2024-11-26 18:54:39.184597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.007 [2024-11-26 18:54:39.184649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.007 [2024-11-26 18:54:39.184768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:48.007 [2024-11-26 18:54:39.184843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.007 pt1 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.007 "name": "raid_bdev1", 00:07:48.007 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:48.007 "strip_size_kb": 64, 00:07:48.007 "state": "configuring", 00:07:48.007 "raid_level": "concat", 00:07:48.007 "superblock": true, 00:07:48.007 "num_base_bdevs": 2, 00:07:48.007 "num_base_bdevs_discovered": 1, 00:07:48.007 "num_base_bdevs_operational": 2, 00:07:48.007 "base_bdevs_list": [ 00:07:48.007 { 00:07:48.007 
"name": "pt1", 00:07:48.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.007 "is_configured": true, 00:07:48.007 "data_offset": 2048, 00:07:48.007 "data_size": 63488 00:07:48.007 }, 00:07:48.007 { 00:07:48.007 "name": null, 00:07:48.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.007 "is_configured": false, 00:07:48.007 "data_offset": 2048, 00:07:48.007 "data_size": 63488 00:07:48.007 } 00:07:48.007 ] 00:07:48.007 }' 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.007 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.574 [2024-11-26 18:54:39.737640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.574 [2024-11-26 18:54:39.737748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.574 [2024-11-26 18:54:39.737781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:48.574 [2024-11-26 18:54:39.737800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.574 [2024-11-26 18:54:39.738411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.574 [2024-11-26 18:54:39.738460] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.574 [2024-11-26 18:54:39.738584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:48.574 [2024-11-26 18:54:39.738625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.574 [2024-11-26 18:54:39.738769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:48.574 [2024-11-26 18:54:39.738790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.574 [2024-11-26 18:54:39.739148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:48.574 [2024-11-26 18:54:39.739337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:48.574 [2024-11-26 18:54:39.739361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:48.574 [2024-11-26 18:54:39.739536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.574 pt2 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.574 
18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.574 "name": "raid_bdev1", 00:07:48.574 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:48.574 "strip_size_kb": 64, 00:07:48.574 "state": "online", 00:07:48.574 "raid_level": "concat", 00:07:48.574 "superblock": true, 00:07:48.574 "num_base_bdevs": 2, 00:07:48.574 "num_base_bdevs_discovered": 2, 00:07:48.574 "num_base_bdevs_operational": 2, 00:07:48.574 "base_bdevs_list": [ 00:07:48.574 { 00:07:48.574 "name": "pt1", 00:07:48.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.574 "is_configured": true, 00:07:48.574 "data_offset": 2048, 00:07:48.574 "data_size": 63488 00:07:48.574 }, 00:07:48.574 { 00:07:48.574 "name": "pt2", 00:07:48.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.574 "is_configured": true, 00:07:48.574 "data_offset": 2048, 00:07:48.574 "data_size": 63488 
00:07:48.574 } 00:07:48.574 ] 00:07:48.574 }' 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.574 18:54:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.142 [2024-11-26 18:54:40.278112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.142 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.143 "name": "raid_bdev1", 00:07:49.143 "aliases": [ 00:07:49.143 "4118e56f-09ea-42e9-a272-1f19b53e03b1" 00:07:49.143 ], 00:07:49.143 "product_name": "Raid Volume", 00:07:49.143 "block_size": 512, 00:07:49.143 "num_blocks": 126976, 00:07:49.143 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:49.143 "assigned_rate_limits": { 00:07:49.143 
"rw_ios_per_sec": 0, 00:07:49.143 "rw_mbytes_per_sec": 0, 00:07:49.143 "r_mbytes_per_sec": 0, 00:07:49.143 "w_mbytes_per_sec": 0 00:07:49.143 }, 00:07:49.143 "claimed": false, 00:07:49.143 "zoned": false, 00:07:49.143 "supported_io_types": { 00:07:49.143 "read": true, 00:07:49.143 "write": true, 00:07:49.143 "unmap": true, 00:07:49.143 "flush": true, 00:07:49.143 "reset": true, 00:07:49.143 "nvme_admin": false, 00:07:49.143 "nvme_io": false, 00:07:49.143 "nvme_io_md": false, 00:07:49.143 "write_zeroes": true, 00:07:49.143 "zcopy": false, 00:07:49.143 "get_zone_info": false, 00:07:49.143 "zone_management": false, 00:07:49.143 "zone_append": false, 00:07:49.143 "compare": false, 00:07:49.143 "compare_and_write": false, 00:07:49.143 "abort": false, 00:07:49.143 "seek_hole": false, 00:07:49.143 "seek_data": false, 00:07:49.143 "copy": false, 00:07:49.143 "nvme_iov_md": false 00:07:49.143 }, 00:07:49.143 "memory_domains": [ 00:07:49.143 { 00:07:49.143 "dma_device_id": "system", 00:07:49.143 "dma_device_type": 1 00:07:49.143 }, 00:07:49.143 { 00:07:49.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.143 "dma_device_type": 2 00:07:49.143 }, 00:07:49.143 { 00:07:49.143 "dma_device_id": "system", 00:07:49.143 "dma_device_type": 1 00:07:49.143 }, 00:07:49.143 { 00:07:49.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.143 "dma_device_type": 2 00:07:49.143 } 00:07:49.143 ], 00:07:49.143 "driver_specific": { 00:07:49.143 "raid": { 00:07:49.143 "uuid": "4118e56f-09ea-42e9-a272-1f19b53e03b1", 00:07:49.143 "strip_size_kb": 64, 00:07:49.143 "state": "online", 00:07:49.143 "raid_level": "concat", 00:07:49.143 "superblock": true, 00:07:49.143 "num_base_bdevs": 2, 00:07:49.143 "num_base_bdevs_discovered": 2, 00:07:49.143 "num_base_bdevs_operational": 2, 00:07:49.143 "base_bdevs_list": [ 00:07:49.143 { 00:07:49.143 "name": "pt1", 00:07:49.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.143 "is_configured": true, 00:07:49.143 "data_offset": 2048, 00:07:49.143 
"data_size": 63488 00:07:49.143 }, 00:07:49.143 { 00:07:49.143 "name": "pt2", 00:07:49.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.143 "is_configured": true, 00:07:49.143 "data_offset": 2048, 00:07:49.143 "data_size": 63488 00:07:49.143 } 00:07:49.143 ] 00:07:49.143 } 00:07:49.143 } 00:07:49.143 }' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.143 pt2' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.143 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.402 [2024-11-26 18:54:40.554190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4118e56f-09ea-42e9-a272-1f19b53e03b1 '!=' 4118e56f-09ea-42e9-a272-1f19b53e03b1 ']' 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62175 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62175 ']' 
00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62175 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62175 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.402 killing process with pid 62175 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62175' 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62175 00:07:49.402 [2024-11-26 18:54:40.635306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.402 18:54:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62175 00:07:49.402 [2024-11-26 18:54:40.635431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.402 [2024-11-26 18:54:40.635502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.402 [2024-11-26 18:54:40.635533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.661 [2024-11-26 18:54:40.827686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.597 18:54:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:50.597 00:07:50.597 real 0m4.976s 00:07:50.597 user 0m7.339s 00:07:50.597 sys 0m0.724s 00:07:50.597 18:54:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.597 18:54:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.597 ************************************ 00:07:50.597 END TEST raid_superblock_test 00:07:50.597 ************************************ 00:07:50.597 18:54:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:50.597 18:54:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.597 18:54:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.597 18:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.856 ************************************ 00:07:50.856 START TEST raid_read_error_test 00:07:50.856 ************************************ 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.856 
18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DnDe4T4Wo2 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62392 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62392 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62392 ']' 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.856 18:54:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.856 [2024-11-26 18:54:42.075932] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:07:50.856 [2024-11-26 18:54:42.076130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62392 ] 00:07:51.116 [2024-11-26 18:54:42.252579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.116 [2024-11-26 18:54:42.388396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.441 [2024-11-26 18:54:42.599668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.441 [2024-11-26 18:54:42.599764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.022 BaseBdev1_malloc 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.022 true 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.022 [2024-11-26 18:54:43.185369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.022 [2024-11-26 18:54:43.185465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.022 [2024-11-26 18:54:43.185495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:52.022 [2024-11-26 18:54:43.185512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.022 [2024-11-26 18:54:43.188634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.022 [2024-11-26 18:54:43.188684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.022 BaseBdev1 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.022 18:54:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.022 BaseBdev2_malloc 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.022 true 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.022 [2024-11-26 18:54:43.245096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.022 [2024-11-26 18:54:43.245193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.022 [2024-11-26 18:54:43.245232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:52.022 [2024-11-26 18:54:43.245248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.022 [2024-11-26 18:54:43.248177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.022 [2024-11-26 18:54:43.248237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:52.022 BaseBdev2 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.022 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.023 [2024-11-26 18:54:43.253303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.023 [2024-11-26 18:54:43.255884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.023 [2024-11-26 18:54:43.256184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.023 [2024-11-26 18:54:43.256213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.023 [2024-11-26 18:54:43.256513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:52.023 [2024-11-26 18:54:43.256744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.023 [2024-11-26 18:54:43.256776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:52.023 [2024-11-26 18:54:43.256985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.023 "name": "raid_bdev1", 00:07:52.023 "uuid": "4f4f8a84-4832-43f8-a100-7bdcb934aa0b", 00:07:52.023 "strip_size_kb": 64, 00:07:52.023 "state": "online", 00:07:52.023 "raid_level": "concat", 00:07:52.023 "superblock": true, 00:07:52.023 "num_base_bdevs": 2, 00:07:52.023 "num_base_bdevs_discovered": 2, 00:07:52.023 "num_base_bdevs_operational": 2, 00:07:52.023 "base_bdevs_list": [ 00:07:52.023 { 00:07:52.023 "name": "BaseBdev1", 00:07:52.023 "uuid": "ec18963a-0a8e-51f1-a0a6-a36141d51d2c", 00:07:52.023 "is_configured": true, 00:07:52.023 "data_offset": 2048, 00:07:52.023 "data_size": 63488 
00:07:52.023 }, 00:07:52.023 { 00:07:52.023 "name": "BaseBdev2", 00:07:52.023 "uuid": "930ba2eb-07ed-5471-88c7-666f31797de4", 00:07:52.023 "is_configured": true, 00:07:52.023 "data_offset": 2048, 00:07:52.023 "data_size": 63488 00:07:52.023 } 00:07:52.023 ] 00:07:52.023 }' 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.023 18:54:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.591 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:52.591 18:54:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:52.591 [2024-11-26 18:54:43.850765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.530 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.530 "name": "raid_bdev1", 00:07:53.530 "uuid": "4f4f8a84-4832-43f8-a100-7bdcb934aa0b", 00:07:53.530 "strip_size_kb": 64, 00:07:53.530 "state": "online", 00:07:53.530 "raid_level": "concat", 00:07:53.530 "superblock": true, 00:07:53.530 "num_base_bdevs": 2, 00:07:53.530 "num_base_bdevs_discovered": 2, 00:07:53.530 "num_base_bdevs_operational": 2, 00:07:53.530 "base_bdevs_list": [ 00:07:53.530 { 00:07:53.530 "name": "BaseBdev1", 00:07:53.530 "uuid": "ec18963a-0a8e-51f1-a0a6-a36141d51d2c", 00:07:53.530 "is_configured": true, 00:07:53.530 "data_offset": 2048, 00:07:53.530 "data_size": 63488 
00:07:53.530 }, 00:07:53.530 { 00:07:53.530 "name": "BaseBdev2", 00:07:53.530 "uuid": "930ba2eb-07ed-5471-88c7-666f31797de4", 00:07:53.530 "is_configured": true, 00:07:53.530 "data_offset": 2048, 00:07:53.530 "data_size": 63488 00:07:53.530 } 00:07:53.531 ] 00:07:53.531 }' 00:07:53.531 18:54:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.531 18:54:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.100 [2024-11-26 18:54:45.274763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.100 [2024-11-26 18:54:45.274827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.100 [2024-11-26 18:54:45.278429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.100 [2024-11-26 18:54:45.278505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.100 [2024-11-26 18:54:45.278551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.100 [2024-11-26 18:54:45.278573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:54.100 { 00:07:54.100 "results": [ 00:07:54.100 { 00:07:54.100 "job": "raid_bdev1", 00:07:54.100 "core_mask": "0x1", 00:07:54.100 "workload": "randrw", 00:07:54.100 "percentage": 50, 00:07:54.100 "status": "finished", 00:07:54.100 "queue_depth": 1, 00:07:54.100 "io_size": 131072, 00:07:54.100 "runtime": 1.421512, 00:07:54.100 "iops": 10690.729307948157, 00:07:54.100 "mibps": 1336.3411634935196, 00:07:54.100 
"io_failed": 1, 00:07:54.100 "io_timeout": 0, 00:07:54.100 "avg_latency_us": 130.75645072916294, 00:07:54.100 "min_latency_us": 37.236363636363635, 00:07:54.100 "max_latency_us": 1899.0545454545454 00:07:54.100 } 00:07:54.100 ], 00:07:54.100 "core_count": 1 00:07:54.100 } 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62392 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62392 ']' 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62392 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62392 00:07:54.100 killing process with pid 62392 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62392' 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62392 00:07:54.100 [2024-11-26 18:54:45.317534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.100 18:54:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62392 00:07:54.100 [2024-11-26 18:54:45.440228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DnDe4T4Wo2 00:07:55.504 18:54:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:55.504 00:07:55.504 real 0m4.602s 00:07:55.504 user 0m5.761s 00:07:55.504 sys 0m0.576s 00:07:55.504 ************************************ 00:07:55.504 END TEST raid_read_error_test 00:07:55.504 ************************************ 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.504 18:54:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.504 18:54:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:55.504 18:54:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:55.504 18:54:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.504 18:54:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.504 ************************************ 00:07:55.504 START TEST raid_write_error_test 00:07:55.504 ************************************ 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:55.504 18:54:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:55.504 18:54:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WD3kDhvWpe 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62538 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62538 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62538 ']' 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.504 18:54:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.504 [2024-11-26 18:54:46.759748] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:07:55.504 [2024-11-26 18:54:46.759979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62538 ] 00:07:55.762 [2024-11-26 18:54:46.950140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.762 [2024-11-26 18:54:47.082772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.021 [2024-11-26 18:54:47.290487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.021 [2024-11-26 18:54:47.290781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 BaseBdev1_malloc 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 true 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 [2024-11-26 18:54:47.790362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:56.588 [2024-11-26 18:54:47.790426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.588 [2024-11-26 18:54:47.790454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:56.588 [2024-11-26 18:54:47.790470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.588 [2024-11-26 18:54:47.793384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.588 [2024-11-26 18:54:47.793446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:56.588 BaseBdev1 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 BaseBdev2_malloc 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:56.588 18:54:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 true 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 [2024-11-26 18:54:47.848231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:56.588 [2024-11-26 18:54:47.848326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.588 [2024-11-26 18:54:47.848350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:56.588 [2024-11-26 18:54:47.848366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.588 [2024-11-26 18:54:47.851298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.588 [2024-11-26 18:54:47.851348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:56.588 BaseBdev2 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 [2024-11-26 18:54:47.856328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:56.588 [2024-11-26 18:54:47.859033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.588 [2024-11-26 18:54:47.859343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.588 [2024-11-26 18:54:47.859367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.588 [2024-11-26 18:54:47.859678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:56.588 [2024-11-26 18:54:47.859924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.588 [2024-11-26 18:54:47.859955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:56.588 [2024-11-26 18:54:47.860152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.588 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.589 18:54:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.589 "name": "raid_bdev1", 00:07:56.589 "uuid": "0a401d7d-9655-498d-8d6c-f1f9044e139c", 00:07:56.589 "strip_size_kb": 64, 00:07:56.589 "state": "online", 00:07:56.589 "raid_level": "concat", 00:07:56.589 "superblock": true, 00:07:56.589 "num_base_bdevs": 2, 00:07:56.589 "num_base_bdevs_discovered": 2, 00:07:56.589 "num_base_bdevs_operational": 2, 00:07:56.589 "base_bdevs_list": [ 00:07:56.589 { 00:07:56.589 "name": "BaseBdev1", 00:07:56.589 "uuid": "091a92d5-0856-5cf4-8d2f-c839eec706bb", 00:07:56.589 "is_configured": true, 00:07:56.589 "data_offset": 2048, 00:07:56.589 "data_size": 63488 00:07:56.589 }, 00:07:56.589 { 00:07:56.589 "name": "BaseBdev2", 00:07:56.589 "uuid": "527a17e8-afd3-59af-a9da-cda0f49163c2", 00:07:56.589 "is_configured": true, 00:07:56.589 "data_offset": 2048, 00:07:56.589 "data_size": 63488 00:07:56.589 } 00:07:56.589 ] 00:07:56.589 }' 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.589 18:54:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.156 18:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:57.156 18:54:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:57.156 [2024-11-26 18:54:48.441973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:58.091 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.092 "name": "raid_bdev1", 00:07:58.092 "uuid": "0a401d7d-9655-498d-8d6c-f1f9044e139c", 00:07:58.092 "strip_size_kb": 64, 00:07:58.092 "state": "online", 00:07:58.092 "raid_level": "concat", 00:07:58.092 "superblock": true, 00:07:58.092 "num_base_bdevs": 2, 00:07:58.092 "num_base_bdevs_discovered": 2, 00:07:58.092 "num_base_bdevs_operational": 2, 00:07:58.092 "base_bdevs_list": [ 00:07:58.092 { 00:07:58.092 "name": "BaseBdev1", 00:07:58.092 "uuid": "091a92d5-0856-5cf4-8d2f-c839eec706bb", 00:07:58.092 "is_configured": true, 00:07:58.092 "data_offset": 2048, 00:07:58.092 "data_size": 63488 00:07:58.092 }, 00:07:58.092 { 00:07:58.092 "name": "BaseBdev2", 00:07:58.092 "uuid": "527a17e8-afd3-59af-a9da-cda0f49163c2", 00:07:58.092 "is_configured": true, 00:07:58.092 "data_offset": 2048, 00:07:58.092 "data_size": 63488 00:07:58.092 } 00:07:58.092 ] 00:07:58.092 }' 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.092 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.658 [2024-11-26 18:54:49.845452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.658 [2024-11-26 18:54:49.845505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.658 [2024-11-26 18:54:49.848992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.658 [2024-11-26 18:54:49.849059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.658 [2024-11-26 18:54:49.849106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.658 [2024-11-26 18:54:49.849124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.658 { 00:07:58.658 "results": [ 00:07:58.658 { 00:07:58.658 "job": "raid_bdev1", 00:07:58.658 "core_mask": "0x1", 00:07:58.658 "workload": "randrw", 00:07:58.658 "percentage": 50, 00:07:58.658 "status": "finished", 00:07:58.658 "queue_depth": 1, 00:07:58.658 "io_size": 131072, 00:07:58.658 "runtime": 1.400802, 00:07:58.658 "iops": 10588.220176727333, 00:07:58.658 "mibps": 1323.5275220909166, 00:07:58.658 "io_failed": 1, 00:07:58.658 "io_timeout": 0, 00:07:58.658 "avg_latency_us": 131.76106445701538, 00:07:58.658 "min_latency_us": 37.93454545454546, 00:07:58.658 "max_latency_us": 1966.08 00:07:58.658 } 00:07:58.658 ], 00:07:58.658 "core_count": 1 00:07:58.658 } 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62538 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62538 ']' 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62538 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62538 00:07:58.658 killing process with pid 62538 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62538' 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62538 00:07:58.658 [2024-11-26 18:54:49.890557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.658 18:54:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62538 00:07:58.658 [2024-11-26 18:54:50.020337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WD3kDhvWpe 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:00.034 00:08:00.034 real 0m4.547s 00:08:00.034 user 0m5.641s 00:08:00.034 sys 0m0.556s 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.034 18:54:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.034 ************************************ 00:08:00.034 END TEST raid_write_error_test 00:08:00.034 ************************************ 00:08:00.034 18:54:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.035 18:54:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:00.035 18:54:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.035 18:54:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.035 18:54:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.035 ************************************ 00:08:00.035 START TEST raid_state_function_test 00:08:00.035 ************************************ 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62681 00:08:00.035 Process raid pid: 62681 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62681' 
00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62681 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62681 ']' 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.035 18:54:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.035 [2024-11-26 18:54:51.346318] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:08:00.035 [2024-11-26 18:54:51.346501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.293 [2024-11-26 18:54:51.537563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.551 [2024-11-26 18:54:51.669665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.551 [2024-11-26 18:54:51.877968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.551 [2024-11-26 18:54:51.878059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 [2024-11-26 18:54:52.362575] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.118 [2024-11-26 18:54:52.362659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.118 [2024-11-26 18:54:52.362678] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.118 [2024-11-26 18:54:52.362694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.118 18:54:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.118 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.119 "name": "Existed_Raid", 00:08:01.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.119 "strip_size_kb": 0, 00:08:01.119 "state": "configuring", 00:08:01.119 
"raid_level": "raid1", 00:08:01.119 "superblock": false, 00:08:01.119 "num_base_bdevs": 2, 00:08:01.119 "num_base_bdevs_discovered": 0, 00:08:01.119 "num_base_bdevs_operational": 2, 00:08:01.119 "base_bdevs_list": [ 00:08:01.119 { 00:08:01.119 "name": "BaseBdev1", 00:08:01.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.119 "is_configured": false, 00:08:01.119 "data_offset": 0, 00:08:01.119 "data_size": 0 00:08:01.119 }, 00:08:01.119 { 00:08:01.119 "name": "BaseBdev2", 00:08:01.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.119 "is_configured": false, 00:08:01.119 "data_offset": 0, 00:08:01.119 "data_size": 0 00:08:01.119 } 00:08:01.119 ] 00:08:01.119 }' 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.119 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 [2024-11-26 18:54:52.882684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.685 [2024-11-26 18:54:52.882745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.685 [2024-11-26 18:54:52.890630] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.685 [2024-11-26 18:54:52.890680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.685 [2024-11-26 18:54:52.890695] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.685 [2024-11-26 18:54:52.890714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 [2024-11-26 18:54:52.936261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.685 BaseBdev1 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 [ 00:08:01.685 { 00:08:01.685 "name": "BaseBdev1", 00:08:01.685 "aliases": [ 00:08:01.685 "b308c275-5097-45b4-8928-9e61c5901158" 00:08:01.685 ], 00:08:01.685 "product_name": "Malloc disk", 00:08:01.685 "block_size": 512, 00:08:01.685 "num_blocks": 65536, 00:08:01.685 "uuid": "b308c275-5097-45b4-8928-9e61c5901158", 00:08:01.685 "assigned_rate_limits": { 00:08:01.685 "rw_ios_per_sec": 0, 00:08:01.685 "rw_mbytes_per_sec": 0, 00:08:01.685 "r_mbytes_per_sec": 0, 00:08:01.685 "w_mbytes_per_sec": 0 00:08:01.685 }, 00:08:01.685 "claimed": true, 00:08:01.685 "claim_type": "exclusive_write", 00:08:01.685 "zoned": false, 00:08:01.685 "supported_io_types": { 00:08:01.685 "read": true, 00:08:01.685 "write": true, 00:08:01.685 "unmap": true, 00:08:01.685 "flush": true, 00:08:01.685 "reset": true, 00:08:01.685 "nvme_admin": false, 00:08:01.685 "nvme_io": false, 00:08:01.685 "nvme_io_md": false, 00:08:01.685 "write_zeroes": true, 00:08:01.685 "zcopy": true, 00:08:01.685 "get_zone_info": false, 00:08:01.685 "zone_management": false, 00:08:01.685 "zone_append": false, 00:08:01.685 "compare": false, 00:08:01.685 "compare_and_write": false, 00:08:01.685 "abort": true, 00:08:01.685 "seek_hole": false, 00:08:01.685 "seek_data": false, 00:08:01.685 "copy": true, 00:08:01.685 "nvme_iov_md": 
false 00:08:01.685 }, 00:08:01.685 "memory_domains": [ 00:08:01.685 { 00:08:01.685 "dma_device_id": "system", 00:08:01.685 "dma_device_type": 1 00:08:01.685 }, 00:08:01.685 { 00:08:01.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.685 "dma_device_type": 2 00:08:01.685 } 00:08:01.685 ], 00:08:01.685 "driver_specific": {} 00:08:01.685 } 00:08:01.685 ] 00:08:01.685 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.686 18:54:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.686 18:54:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.686 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.686 "name": "Existed_Raid", 00:08:01.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.686 "strip_size_kb": 0, 00:08:01.686 "state": "configuring", 00:08:01.686 "raid_level": "raid1", 00:08:01.686 "superblock": false, 00:08:01.686 "num_base_bdevs": 2, 00:08:01.686 "num_base_bdevs_discovered": 1, 00:08:01.686 "num_base_bdevs_operational": 2, 00:08:01.686 "base_bdevs_list": [ 00:08:01.686 { 00:08:01.686 "name": "BaseBdev1", 00:08:01.686 "uuid": "b308c275-5097-45b4-8928-9e61c5901158", 00:08:01.686 "is_configured": true, 00:08:01.686 "data_offset": 0, 00:08:01.686 "data_size": 65536 00:08:01.686 }, 00:08:01.686 { 00:08:01.686 "name": "BaseBdev2", 00:08:01.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.686 "is_configured": false, 00:08:01.686 "data_offset": 0, 00:08:01.686 "data_size": 0 00:08:01.686 } 00:08:01.686 ] 00:08:01.686 }' 00:08:01.686 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.686 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.253 [2024-11-26 18:54:53.464452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.253 [2024-11-26 18:54:53.464520] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.253 [2024-11-26 18:54:53.472479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.253 [2024-11-26 18:54:53.474985] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.253 [2024-11-26 18:54:53.475053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.253 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.253 "name": "Existed_Raid", 00:08:02.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.253 "strip_size_kb": 0, 00:08:02.253 "state": "configuring", 00:08:02.253 "raid_level": "raid1", 00:08:02.253 "superblock": false, 00:08:02.253 "num_base_bdevs": 2, 00:08:02.253 "num_base_bdevs_discovered": 1, 00:08:02.253 "num_base_bdevs_operational": 2, 00:08:02.253 "base_bdevs_list": [ 00:08:02.253 { 00:08:02.253 "name": "BaseBdev1", 00:08:02.253 "uuid": "b308c275-5097-45b4-8928-9e61c5901158", 00:08:02.253 "is_configured": true, 00:08:02.253 "data_offset": 0, 00:08:02.253 "data_size": 65536 00:08:02.253 }, 00:08:02.253 { 00:08:02.253 "name": "BaseBdev2", 00:08:02.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.253 "is_configured": false, 00:08:02.253 "data_offset": 0, 00:08:02.253 "data_size": 0 00:08:02.253 } 00:08:02.253 
] 00:08:02.253 }' 00:08:02.254 18:54:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.254 18:54:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.822 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.823 [2024-11-26 18:54:54.043662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.823 [2024-11-26 18:54:54.043744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.823 [2024-11-26 18:54:54.043758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:02.823 [2024-11-26 18:54:54.044111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.823 [2024-11-26 18:54:54.044381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.823 [2024-11-26 18:54:54.044412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:02.823 [2024-11-26 18:54:54.044733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.823 BaseBdev2 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.823 18:54:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.823 [ 00:08:02.823 { 00:08:02.823 "name": "BaseBdev2", 00:08:02.823 "aliases": [ 00:08:02.823 "0fc6b105-795f-46d5-8e16-780ececc1383" 00:08:02.823 ], 00:08:02.823 "product_name": "Malloc disk", 00:08:02.823 "block_size": 512, 00:08:02.823 "num_blocks": 65536, 00:08:02.823 "uuid": "0fc6b105-795f-46d5-8e16-780ececc1383", 00:08:02.823 "assigned_rate_limits": { 00:08:02.823 "rw_ios_per_sec": 0, 00:08:02.823 "rw_mbytes_per_sec": 0, 00:08:02.823 "r_mbytes_per_sec": 0, 00:08:02.823 "w_mbytes_per_sec": 0 00:08:02.823 }, 00:08:02.823 "claimed": true, 00:08:02.823 "claim_type": "exclusive_write", 00:08:02.823 "zoned": false, 00:08:02.823 "supported_io_types": { 00:08:02.823 "read": true, 00:08:02.823 "write": true, 00:08:02.823 "unmap": true, 00:08:02.823 "flush": true, 00:08:02.823 "reset": true, 00:08:02.823 "nvme_admin": false, 00:08:02.823 "nvme_io": false, 00:08:02.823 "nvme_io_md": 
false, 00:08:02.823 "write_zeroes": true, 00:08:02.823 "zcopy": true, 00:08:02.823 "get_zone_info": false, 00:08:02.823 "zone_management": false, 00:08:02.823 "zone_append": false, 00:08:02.823 "compare": false, 00:08:02.823 "compare_and_write": false, 00:08:02.823 "abort": true, 00:08:02.823 "seek_hole": false, 00:08:02.823 "seek_data": false, 00:08:02.823 "copy": true, 00:08:02.823 "nvme_iov_md": false 00:08:02.823 }, 00:08:02.823 "memory_domains": [ 00:08:02.823 { 00:08:02.823 "dma_device_id": "system", 00:08:02.823 "dma_device_type": 1 00:08:02.823 }, 00:08:02.823 { 00:08:02.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.823 "dma_device_type": 2 00:08:02.823 } 00:08:02.823 ], 00:08:02.823 "driver_specific": {} 00:08:02.823 } 00:08:02.823 ] 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.823 "name": "Existed_Raid", 00:08:02.823 "uuid": "9c609f97-7b94-4ace-9893-02cb5effe427", 00:08:02.823 "strip_size_kb": 0, 00:08:02.823 "state": "online", 00:08:02.823 "raid_level": "raid1", 00:08:02.823 "superblock": false, 00:08:02.823 "num_base_bdevs": 2, 00:08:02.823 "num_base_bdevs_discovered": 2, 00:08:02.823 "num_base_bdevs_operational": 2, 00:08:02.823 "base_bdevs_list": [ 00:08:02.823 { 00:08:02.823 "name": "BaseBdev1", 00:08:02.823 "uuid": "b308c275-5097-45b4-8928-9e61c5901158", 00:08:02.823 "is_configured": true, 00:08:02.823 "data_offset": 0, 00:08:02.823 "data_size": 65536 00:08:02.823 }, 00:08:02.823 { 00:08:02.823 "name": "BaseBdev2", 00:08:02.823 "uuid": "0fc6b105-795f-46d5-8e16-780ececc1383", 00:08:02.823 "is_configured": true, 00:08:02.823 "data_offset": 0, 00:08:02.823 "data_size": 65536 00:08:02.823 } 00:08:02.823 ] 00:08:02.823 }' 00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:02.823 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 [2024-11-26 18:54:54.596295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.394 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.394 "name": "Existed_Raid", 00:08:03.394 "aliases": [ 00:08:03.395 "9c609f97-7b94-4ace-9893-02cb5effe427" 00:08:03.395 ], 00:08:03.395 "product_name": "Raid Volume", 00:08:03.395 "block_size": 512, 00:08:03.395 "num_blocks": 65536, 00:08:03.395 "uuid": "9c609f97-7b94-4ace-9893-02cb5effe427", 00:08:03.395 "assigned_rate_limits": { 00:08:03.395 "rw_ios_per_sec": 0, 00:08:03.395 "rw_mbytes_per_sec": 0, 00:08:03.395 "r_mbytes_per_sec": 
0, 00:08:03.395 "w_mbytes_per_sec": 0 00:08:03.395 }, 00:08:03.395 "claimed": false, 00:08:03.395 "zoned": false, 00:08:03.395 "supported_io_types": { 00:08:03.395 "read": true, 00:08:03.395 "write": true, 00:08:03.395 "unmap": false, 00:08:03.395 "flush": false, 00:08:03.395 "reset": true, 00:08:03.395 "nvme_admin": false, 00:08:03.395 "nvme_io": false, 00:08:03.395 "nvme_io_md": false, 00:08:03.395 "write_zeroes": true, 00:08:03.395 "zcopy": false, 00:08:03.395 "get_zone_info": false, 00:08:03.395 "zone_management": false, 00:08:03.395 "zone_append": false, 00:08:03.395 "compare": false, 00:08:03.395 "compare_and_write": false, 00:08:03.395 "abort": false, 00:08:03.395 "seek_hole": false, 00:08:03.395 "seek_data": false, 00:08:03.395 "copy": false, 00:08:03.395 "nvme_iov_md": false 00:08:03.395 }, 00:08:03.395 "memory_domains": [ 00:08:03.395 { 00:08:03.395 "dma_device_id": "system", 00:08:03.395 "dma_device_type": 1 00:08:03.395 }, 00:08:03.395 { 00:08:03.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.395 "dma_device_type": 2 00:08:03.395 }, 00:08:03.395 { 00:08:03.395 "dma_device_id": "system", 00:08:03.395 "dma_device_type": 1 00:08:03.395 }, 00:08:03.395 { 00:08:03.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.395 "dma_device_type": 2 00:08:03.395 } 00:08:03.395 ], 00:08:03.395 "driver_specific": { 00:08:03.395 "raid": { 00:08:03.395 "uuid": "9c609f97-7b94-4ace-9893-02cb5effe427", 00:08:03.395 "strip_size_kb": 0, 00:08:03.395 "state": "online", 00:08:03.395 "raid_level": "raid1", 00:08:03.395 "superblock": false, 00:08:03.395 "num_base_bdevs": 2, 00:08:03.395 "num_base_bdevs_discovered": 2, 00:08:03.395 "num_base_bdevs_operational": 2, 00:08:03.395 "base_bdevs_list": [ 00:08:03.395 { 00:08:03.395 "name": "BaseBdev1", 00:08:03.395 "uuid": "b308c275-5097-45b4-8928-9e61c5901158", 00:08:03.395 "is_configured": true, 00:08:03.395 "data_offset": 0, 00:08:03.395 "data_size": 65536 00:08:03.395 }, 00:08:03.395 { 00:08:03.395 "name": "BaseBdev2", 
00:08:03.395 "uuid": "0fc6b105-795f-46d5-8e16-780ececc1383", 00:08:03.395 "is_configured": true, 00:08:03.395 "data_offset": 0, 00:08:03.395 "data_size": 65536 00:08:03.395 } 00:08:03.395 ] 00:08:03.395 } 00:08:03.395 } 00:08:03.395 }' 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.395 BaseBdev2' 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.395 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 [2024-11-26 18:54:54.872129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.654 18:54:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.913 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.913 "name": "Existed_Raid", 00:08:03.913 "uuid": "9c609f97-7b94-4ace-9893-02cb5effe427", 00:08:03.913 "strip_size_kb": 0, 00:08:03.913 "state": "online", 00:08:03.913 "raid_level": "raid1", 00:08:03.913 "superblock": false, 00:08:03.913 "num_base_bdevs": 2, 00:08:03.913 "num_base_bdevs_discovered": 1, 00:08:03.913 "num_base_bdevs_operational": 1, 00:08:03.913 "base_bdevs_list": [ 00:08:03.913 
{ 00:08:03.913 "name": null, 00:08:03.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.913 "is_configured": false, 00:08:03.913 "data_offset": 0, 00:08:03.913 "data_size": 65536 00:08:03.913 }, 00:08:03.913 { 00:08:03.913 "name": "BaseBdev2", 00:08:03.913 "uuid": "0fc6b105-795f-46d5-8e16-780ececc1383", 00:08:03.913 "is_configured": true, 00:08:03.913 "data_offset": 0, 00:08:03.913 "data_size": 65536 00:08:03.913 } 00:08:03.913 ] 00:08:03.913 }' 00:08:03.913 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.913 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.171 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:04.171 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.171 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.172 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:04.172 [2024-11-26 18:54:55.529291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.172 [2024-11-26 18:54:55.529434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.431 [2024-11-26 18:54:55.619910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.431 [2024-11-26 18:54:55.619988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.431 [2024-11-26 18:54:55.620013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62681 00:08:04.431 18:54:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62681 ']' 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62681 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62681 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.431 killing process with pid 62681 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62681' 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62681 00:08:04.431 [2024-11-26 18:54:55.715913] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.431 18:54:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62681 00:08:04.431 [2024-11-26 18:54:55.731119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.810 18:54:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:05.810 00:08:05.811 real 0m5.601s 00:08:05.811 user 0m8.460s 00:08:05.811 sys 0m0.797s 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.811 ************************************ 00:08:05.811 END TEST raid_state_function_test 00:08:05.811 ************************************ 00:08:05.811 18:54:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:05.811 18:54:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.811 18:54:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.811 18:54:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.811 ************************************ 00:08:05.811 START TEST raid_state_function_test_sb 00:08:05.811 ************************************ 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62940 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62940' 00:08:05.811 Process raid pid: 62940 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62940 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62940 ']' 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.811 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.811 18:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.811 [2024-11-26 18:54:56.992098] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:08:05.811 [2024-11-26 18:54:56.992285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.070 [2024-11-26 18:54:57.184393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.070 [2024-11-26 18:54:57.357629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.329 [2024-11-26 18:54:57.617255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.329 [2024-11-26 18:54:57.617323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.938 [2024-11-26 18:54:58.069051] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.938 [2024-11-26 18:54:58.069120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.938 [2024-11-26 18:54:58.069138] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.938 [2024-11-26 18:54:58.069156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.938 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.938 "name": "Existed_Raid", 00:08:06.938 "uuid": "50a096ab-cbf2-43c1-bc74-8644328d73de", 00:08:06.938 "strip_size_kb": 0, 00:08:06.938 "state": "configuring", 00:08:06.938 "raid_level": "raid1", 00:08:06.938 "superblock": true, 00:08:06.939 "num_base_bdevs": 2, 00:08:06.939 "num_base_bdevs_discovered": 0, 00:08:06.939 "num_base_bdevs_operational": 2, 00:08:06.939 "base_bdevs_list": [ 00:08:06.939 { 00:08:06.939 "name": "BaseBdev1", 00:08:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.939 "is_configured": false, 00:08:06.939 "data_offset": 0, 00:08:06.939 "data_size": 0 00:08:06.939 }, 00:08:06.939 { 00:08:06.939 "name": "BaseBdev2", 00:08:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.939 "is_configured": false, 00:08:06.939 "data_offset": 0, 00:08:06.939 "data_size": 0 00:08:06.939 } 00:08:06.939 ] 00:08:06.939 }' 00:08:06.939 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.939 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 [2024-11-26 18:54:58.577127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:07.507 [2024-11-26 18:54:58.577177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 [2024-11-26 18:54:58.585083] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.507 [2024-11-26 18:54:58.585138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.507 [2024-11-26 18:54:58.585154] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.507 [2024-11-26 18:54:58.585173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 [2024-11-26 18:54:58.631847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.507 BaseBdev1 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 [ 00:08:07.507 { 00:08:07.507 "name": "BaseBdev1", 00:08:07.507 "aliases": [ 00:08:07.507 "6a453c5e-1a22-4948-ac55-e307d55046c5" 00:08:07.507 ], 00:08:07.507 "product_name": "Malloc disk", 00:08:07.507 "block_size": 512, 00:08:07.507 "num_blocks": 65536, 00:08:07.507 "uuid": "6a453c5e-1a22-4948-ac55-e307d55046c5", 00:08:07.507 "assigned_rate_limits": { 00:08:07.507 "rw_ios_per_sec": 0, 00:08:07.507 "rw_mbytes_per_sec": 0, 00:08:07.507 "r_mbytes_per_sec": 0, 00:08:07.507 "w_mbytes_per_sec": 0 00:08:07.507 }, 00:08:07.507 "claimed": true, 
00:08:07.507 "claim_type": "exclusive_write", 00:08:07.507 "zoned": false, 00:08:07.507 "supported_io_types": { 00:08:07.507 "read": true, 00:08:07.507 "write": true, 00:08:07.507 "unmap": true, 00:08:07.507 "flush": true, 00:08:07.507 "reset": true, 00:08:07.507 "nvme_admin": false, 00:08:07.507 "nvme_io": false, 00:08:07.507 "nvme_io_md": false, 00:08:07.507 "write_zeroes": true, 00:08:07.507 "zcopy": true, 00:08:07.507 "get_zone_info": false, 00:08:07.507 "zone_management": false, 00:08:07.507 "zone_append": false, 00:08:07.507 "compare": false, 00:08:07.507 "compare_and_write": false, 00:08:07.507 "abort": true, 00:08:07.507 "seek_hole": false, 00:08:07.507 "seek_data": false, 00:08:07.507 "copy": true, 00:08:07.507 "nvme_iov_md": false 00:08:07.507 }, 00:08:07.507 "memory_domains": [ 00:08:07.507 { 00:08:07.507 "dma_device_id": "system", 00:08:07.507 "dma_device_type": 1 00:08:07.507 }, 00:08:07.507 { 00:08:07.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.507 "dma_device_type": 2 00:08:07.507 } 00:08:07.507 ], 00:08:07.507 "driver_specific": {} 00:08:07.507 } 00:08:07.507 ] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.507 "name": "Existed_Raid", 00:08:07.507 "uuid": "8f5afb92-57b2-4c4f-9d3f-265d717415cd", 00:08:07.507 "strip_size_kb": 0, 00:08:07.507 "state": "configuring", 00:08:07.507 "raid_level": "raid1", 00:08:07.507 "superblock": true, 00:08:07.507 "num_base_bdevs": 2, 00:08:07.507 "num_base_bdevs_discovered": 1, 00:08:07.507 "num_base_bdevs_operational": 2, 00:08:07.507 "base_bdevs_list": [ 00:08:07.507 { 00:08:07.507 "name": "BaseBdev1", 00:08:07.507 "uuid": "6a453c5e-1a22-4948-ac55-e307d55046c5", 00:08:07.507 "is_configured": true, 00:08:07.507 "data_offset": 2048, 00:08:07.507 "data_size": 63488 00:08:07.507 }, 00:08:07.507 { 00:08:07.507 "name": "BaseBdev2", 00:08:07.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.507 "is_configured": false, 00:08:07.507 
"data_offset": 0, 00:08:07.507 "data_size": 0 00:08:07.507 } 00:08:07.507 ] 00:08:07.507 }' 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.507 18:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.073 [2024-11-26 18:54:59.196122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.073 [2024-11-26 18:54:59.196387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.073 [2024-11-26 18:54:59.208160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.073 [2024-11-26 18:54:59.210823] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.073 [2024-11-26 18:54:59.211053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.073 "name": "Existed_Raid", 00:08:08.073 "uuid": "28b2272b-8f22-4f4b-b3da-409bb3d51149", 00:08:08.073 "strip_size_kb": 0, 00:08:08.073 "state": "configuring", 00:08:08.073 "raid_level": "raid1", 00:08:08.073 "superblock": true, 00:08:08.073 "num_base_bdevs": 2, 00:08:08.073 "num_base_bdevs_discovered": 1, 00:08:08.073 "num_base_bdevs_operational": 2, 00:08:08.073 "base_bdevs_list": [ 00:08:08.073 { 00:08:08.073 "name": "BaseBdev1", 00:08:08.073 "uuid": "6a453c5e-1a22-4948-ac55-e307d55046c5", 00:08:08.073 "is_configured": true, 00:08:08.073 "data_offset": 2048, 00:08:08.073 "data_size": 63488 00:08:08.073 }, 00:08:08.073 { 00:08:08.073 "name": "BaseBdev2", 00:08:08.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.073 "is_configured": false, 00:08:08.073 "data_offset": 0, 00:08:08.073 "data_size": 0 00:08:08.073 } 00:08:08.073 ] 00:08:08.073 }' 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.073 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.639 [2024-11-26 18:54:59.773647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.639 [2024-11-26 18:54:59.774048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.639 [2024-11-26 18:54:59.774069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.639 BaseBdev2 00:08:08.639 [2024-11-26 18:54:59.774409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:08.639 [2024-11-26 18:54:59.774632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.639 [2024-11-26 18:54:59.774658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:08.639 [2024-11-26 18:54:59.774838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.639 18:54:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.639 [ 00:08:08.639 { 00:08:08.639 "name": "BaseBdev2", 00:08:08.639 "aliases": [ 00:08:08.639 "7b1c9a22-3574-47fc-a87d-c37a932c67f6" 00:08:08.639 ], 00:08:08.639 "product_name": "Malloc disk", 00:08:08.639 "block_size": 512, 00:08:08.639 "num_blocks": 65536, 00:08:08.639 "uuid": "7b1c9a22-3574-47fc-a87d-c37a932c67f6", 00:08:08.639 "assigned_rate_limits": { 00:08:08.639 "rw_ios_per_sec": 0, 00:08:08.639 "rw_mbytes_per_sec": 0, 00:08:08.639 "r_mbytes_per_sec": 0, 00:08:08.639 "w_mbytes_per_sec": 0 00:08:08.639 }, 00:08:08.639 "claimed": true, 00:08:08.639 "claim_type": "exclusive_write", 00:08:08.639 "zoned": false, 00:08:08.639 "supported_io_types": { 00:08:08.639 "read": true, 00:08:08.639 "write": true, 00:08:08.639 "unmap": true, 00:08:08.639 "flush": true, 00:08:08.639 "reset": true, 00:08:08.639 "nvme_admin": false, 00:08:08.639 "nvme_io": false, 00:08:08.639 "nvme_io_md": false, 00:08:08.639 "write_zeroes": true, 00:08:08.639 "zcopy": true, 00:08:08.639 "get_zone_info": false, 00:08:08.639 "zone_management": false, 00:08:08.639 "zone_append": false, 00:08:08.639 "compare": false, 00:08:08.639 "compare_and_write": false, 00:08:08.639 "abort": true, 00:08:08.639 "seek_hole": false, 00:08:08.639 "seek_data": false, 00:08:08.639 "copy": true, 00:08:08.639 "nvme_iov_md": false 00:08:08.639 }, 00:08:08.639 "memory_domains": [ 00:08:08.639 { 00:08:08.639 "dma_device_id": "system", 00:08:08.640 "dma_device_type": 1 00:08:08.640 }, 00:08:08.640 { 00:08:08.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.640 "dma_device_type": 2 00:08:08.640 } 00:08:08.640 ], 00:08:08.640 "driver_specific": {} 00:08:08.640 } 00:08:08.640 ] 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:08.640 "name": "Existed_Raid", 00:08:08.640 "uuid": "28b2272b-8f22-4f4b-b3da-409bb3d51149", 00:08:08.640 "strip_size_kb": 0, 00:08:08.640 "state": "online", 00:08:08.640 "raid_level": "raid1", 00:08:08.640 "superblock": true, 00:08:08.640 "num_base_bdevs": 2, 00:08:08.640 "num_base_bdevs_discovered": 2, 00:08:08.640 "num_base_bdevs_operational": 2, 00:08:08.640 "base_bdevs_list": [ 00:08:08.640 { 00:08:08.640 "name": "BaseBdev1", 00:08:08.640 "uuid": "6a453c5e-1a22-4948-ac55-e307d55046c5", 00:08:08.640 "is_configured": true, 00:08:08.640 "data_offset": 2048, 00:08:08.640 "data_size": 63488 00:08:08.640 }, 00:08:08.640 { 00:08:08.640 "name": "BaseBdev2", 00:08:08.640 "uuid": "7b1c9a22-3574-47fc-a87d-c37a932c67f6", 00:08:08.640 "is_configured": true, 00:08:08.640 "data_offset": 2048, 00:08:08.640 "data_size": 63488 00:08:08.640 } 00:08:08.640 ] 00:08:08.640 }' 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.640 18:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.208 [2024-11-26 18:55:00.330248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.208 "name": "Existed_Raid", 00:08:09.208 "aliases": [ 00:08:09.208 "28b2272b-8f22-4f4b-b3da-409bb3d51149" 00:08:09.208 ], 00:08:09.208 "product_name": "Raid Volume", 00:08:09.208 "block_size": 512, 00:08:09.208 "num_blocks": 63488, 00:08:09.208 "uuid": "28b2272b-8f22-4f4b-b3da-409bb3d51149", 00:08:09.208 "assigned_rate_limits": { 00:08:09.208 "rw_ios_per_sec": 0, 00:08:09.208 "rw_mbytes_per_sec": 0, 00:08:09.208 "r_mbytes_per_sec": 0, 00:08:09.208 "w_mbytes_per_sec": 0 00:08:09.208 }, 00:08:09.208 "claimed": false, 00:08:09.208 "zoned": false, 00:08:09.208 "supported_io_types": { 00:08:09.208 "read": true, 00:08:09.208 "write": true, 00:08:09.208 "unmap": false, 00:08:09.208 "flush": false, 00:08:09.208 "reset": true, 00:08:09.208 "nvme_admin": false, 00:08:09.208 "nvme_io": false, 00:08:09.208 "nvme_io_md": false, 00:08:09.208 "write_zeroes": true, 00:08:09.208 "zcopy": false, 00:08:09.208 "get_zone_info": false, 00:08:09.208 "zone_management": false, 00:08:09.208 "zone_append": false, 00:08:09.208 "compare": false, 00:08:09.208 "compare_and_write": false, 00:08:09.208 "abort": false, 00:08:09.208 "seek_hole": false, 00:08:09.208 "seek_data": false, 00:08:09.208 "copy": false, 00:08:09.208 "nvme_iov_md": false 00:08:09.208 }, 00:08:09.208 "memory_domains": [ 00:08:09.208 { 00:08:09.208 "dma_device_id": "system", 00:08:09.208 "dma_device_type": 1 00:08:09.208 }, 
00:08:09.208 { 00:08:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.208 "dma_device_type": 2 00:08:09.208 }, 00:08:09.208 { 00:08:09.208 "dma_device_id": "system", 00:08:09.208 "dma_device_type": 1 00:08:09.208 }, 00:08:09.208 { 00:08:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.208 "dma_device_type": 2 00:08:09.208 } 00:08:09.208 ], 00:08:09.208 "driver_specific": { 00:08:09.208 "raid": { 00:08:09.208 "uuid": "28b2272b-8f22-4f4b-b3da-409bb3d51149", 00:08:09.208 "strip_size_kb": 0, 00:08:09.208 "state": "online", 00:08:09.208 "raid_level": "raid1", 00:08:09.208 "superblock": true, 00:08:09.208 "num_base_bdevs": 2, 00:08:09.208 "num_base_bdevs_discovered": 2, 00:08:09.208 "num_base_bdevs_operational": 2, 00:08:09.208 "base_bdevs_list": [ 00:08:09.208 { 00:08:09.208 "name": "BaseBdev1", 00:08:09.208 "uuid": "6a453c5e-1a22-4948-ac55-e307d55046c5", 00:08:09.208 "is_configured": true, 00:08:09.208 "data_offset": 2048, 00:08:09.208 "data_size": 63488 00:08:09.208 }, 00:08:09.208 { 00:08:09.208 "name": "BaseBdev2", 00:08:09.208 "uuid": "7b1c9a22-3574-47fc-a87d-c37a932c67f6", 00:08:09.208 "is_configured": true, 00:08:09.208 "data_offset": 2048, 00:08:09.208 "data_size": 63488 00:08:09.208 } 00:08:09.208 ] 00:08:09.208 } 00:08:09.208 } 00:08:09.208 }' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.208 BaseBdev2' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.208 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.209 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.468 [2024-11-26 18:55:00.622066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.468 
18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.468 "name": "Existed_Raid", 00:08:09.468 "uuid": "28b2272b-8f22-4f4b-b3da-409bb3d51149", 00:08:09.468 "strip_size_kb": 0, 00:08:09.468 "state": "online", 00:08:09.468 "raid_level": "raid1", 00:08:09.468 "superblock": true, 00:08:09.468 "num_base_bdevs": 2, 00:08:09.468 "num_base_bdevs_discovered": 1, 00:08:09.468 "num_base_bdevs_operational": 1, 00:08:09.468 "base_bdevs_list": [ 00:08:09.468 { 00:08:09.468 "name": null, 00:08:09.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.468 "is_configured": false, 00:08:09.468 "data_offset": 0, 00:08:09.468 "data_size": 63488 00:08:09.468 }, 00:08:09.468 { 00:08:09.468 "name": "BaseBdev2", 00:08:09.468 "uuid": "7b1c9a22-3574-47fc-a87d-c37a932c67f6", 00:08:09.468 "is_configured": true, 00:08:09.468 "data_offset": 2048, 00:08:09.468 "data_size": 63488 00:08:09.468 } 00:08:09.468 ] 00:08:09.468 }' 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.468 18:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.036 18:55:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.036 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.036 [2024-11-26 18:55:01.337017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.036 [2024-11-26 18:55:01.337171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.294 [2024-11-26 18:55:01.429429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.294 [2024-11-26 18:55:01.429760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.294 [2024-11-26 18:55:01.430038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:10.294 18:55:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.294 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62940 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62940 ']' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62940 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62940 00:08:10.295 killing process with pid 62940 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62940' 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62940 00:08:10.295 [2024-11-26 18:55:01.525720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.295 18:55:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62940 00:08:10.295 [2024-11-26 18:55:01.541384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.671 18:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:11.671 00:08:11.671 real 0m5.809s 00:08:11.671 user 0m8.762s 00:08:11.671 sys 0m0.808s 00:08:11.671 18:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.671 ************************************ 00:08:11.671 END TEST raid_state_function_test_sb 00:08:11.671 ************************************ 00:08:11.671 18:55:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.671 18:55:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:11.671 18:55:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:11.671 18:55:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.671 18:55:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.671 ************************************ 00:08:11.671 START TEST raid_superblock_test 00:08:11.671 ************************************ 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63196 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63196 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63196 ']' 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.671 18:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.671 [2024-11-26 18:55:02.872012] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:08:11.671 [2024-11-26 18:55:02.872443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:08:11.984 [2024-11-26 18:55:03.073322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.984 [2024-11-26 18:55:03.247707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.242 [2024-11-26 18:55:03.510681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.242 [2024-11-26 18:55:03.510771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.834 18:55:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.834 malloc1 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.834 [2024-11-26 18:55:04.065762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.834 [2024-11-26 18:55:04.065852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.834 [2024-11-26 18:55:04.065919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:12.834 [2024-11-26 18:55:04.065943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.834 
[2024-11-26 18:55:04.069222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:12.834 [2024-11-26 18:55:04.069444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:12.834 pt1
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.834 malloc2
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.834 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.834 [2024-11-26 18:55:04.124625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:12.834 [2024-11-26 18:55:04.124728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:12.834 [2024-11-26 18:55:04.124777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:12.834 [2024-11-26 18:55:04.124793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:12.835 [2024-11-26 18:55:04.128148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:12.835 [2024-11-26 18:55:04.128196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:12.835 pt2
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.835 [2024-11-26 18:55:04.136994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:12.835 [2024-11-26 18:55:04.139852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:12.835 [2024-11-26 18:55:04.140119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:12.835 [2024-11-26 18:55:04.140151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:12.835 [2024-11-26 18:55:04.140510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:12.835 [2024-11-26 18:55:04.140748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:12.835 [2024-11-26 18:55:04.140783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:12.835 [2024-11-26 18:55:04.141070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.835 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.093 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.093 "name": "raid_bdev1",
00:08:13.093 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:13.093 "strip_size_kb": 0,
00:08:13.093 "state": "online",
00:08:13.093 "raid_level": "raid1",
00:08:13.093 "superblock": true,
00:08:13.093 "num_base_bdevs": 2,
00:08:13.093 "num_base_bdevs_discovered": 2,
00:08:13.093 "num_base_bdevs_operational": 2,
00:08:13.093 "base_bdevs_list": [
00:08:13.093 {
00:08:13.093 "name": "pt1",
00:08:13.093 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:13.093 "is_configured": true,
00:08:13.093 "data_offset": 2048,
00:08:13.093 "data_size": 63488
00:08:13.093 },
00:08:13.093 {
00:08:13.093 "name": "pt2",
00:08:13.093 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:13.093 "is_configured": true,
00:08:13.093 "data_offset": 2048,
00:08:13.093 "data_size": 63488
00:08:13.093 }
00:08:13.093 ]
00:08:13.093 }'
00:08:13.093 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.093 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.351 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:13.351 [2024-11-26 18:55:04.713580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:13.610 "name": "raid_bdev1",
00:08:13.610 "aliases": [
00:08:13.610 "f1b0e7fa-6784-43f4-8182-0fe54b003909"
00:08:13.610 ],
00:08:13.610 "product_name": "Raid Volume",
00:08:13.610 "block_size": 512,
00:08:13.610 "num_blocks": 63488,
00:08:13.610 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:13.610 "assigned_rate_limits": {
00:08:13.610 "rw_ios_per_sec": 0,
00:08:13.610 "rw_mbytes_per_sec": 0,
00:08:13.610 "r_mbytes_per_sec": 0,
00:08:13.610 "w_mbytes_per_sec": 0
00:08:13.610 },
00:08:13.610 "claimed": false,
00:08:13.610 "zoned": false,
00:08:13.610 "supported_io_types": {
00:08:13.610 "read": true,
00:08:13.610 "write": true,
00:08:13.610 "unmap": false,
00:08:13.610 "flush": false,
00:08:13.610 "reset": true,
00:08:13.610 "nvme_admin": false,
00:08:13.610 "nvme_io": false,
00:08:13.610 "nvme_io_md": false,
00:08:13.610 "write_zeroes": true,
00:08:13.610 "zcopy": false,
00:08:13.610 "get_zone_info": false,
00:08:13.610 "zone_management": false,
00:08:13.610 "zone_append": false,
00:08:13.610 "compare": false,
00:08:13.610 "compare_and_write": false,
00:08:13.610 "abort": false,
00:08:13.610 "seek_hole": false,
00:08:13.610 "seek_data": false,
00:08:13.610 "copy": false,
00:08:13.610 "nvme_iov_md": false
00:08:13.610 },
00:08:13.610 "memory_domains": [
00:08:13.610 {
00:08:13.610 "dma_device_id": "system",
00:08:13.610 "dma_device_type": 1
00:08:13.610 },
00:08:13.610 {
00:08:13.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.610 "dma_device_type": 2
00:08:13.610 },
00:08:13.610 {
00:08:13.610 "dma_device_id": "system",
00:08:13.610 "dma_device_type": 1
00:08:13.610 },
00:08:13.610 {
00:08:13.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.610 "dma_device_type": 2
00:08:13.610 }
00:08:13.610 ],
00:08:13.610 "driver_specific": {
00:08:13.610 "raid": {
00:08:13.610 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:13.610 "strip_size_kb": 0,
00:08:13.610 "state": "online",
00:08:13.610 "raid_level": "raid1",
00:08:13.610 "superblock": true,
00:08:13.610 "num_base_bdevs": 2,
00:08:13.610 "num_base_bdevs_discovered": 2,
00:08:13.610 "num_base_bdevs_operational": 2,
00:08:13.610 "base_bdevs_list": [
00:08:13.610 {
00:08:13.610 "name": "pt1",
00:08:13.610 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:13.610 "is_configured": true,
00:08:13.610 "data_offset": 2048,
00:08:13.610 "data_size": 63488
00:08:13.610 },
00:08:13.610 {
00:08:13.610 "name": "pt2",
00:08:13.610 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:13.610 "is_configured": true,
00:08:13.610 "data_offset": 2048,
00:08:13.610 "data_size": 63488
00:08:13.610 }
00:08:13.610 ]
00:08:13.610 }
00:08:13.610 }
00:08:13.610 }'
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:13.610 pt2'
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.610 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.869 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:13.869 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:13.869 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:13.869 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.869 18:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.869 18:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:13.869 [2024-11-26 18:55:04.985615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:13.869 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.869 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f1b0e7fa-6784-43f4-8182-0fe54b003909
00:08:13.869 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f1b0e7fa-6784-43f4-8182-0fe54b003909 ']'
00:08:13.869 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:13.869 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.869 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.869 [2024-11-26 18:55:05.033220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:13.869 [2024-11-26 18:55:05.033252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:13.869 [2024-11-26 18:55:05.033362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:13.869 [2024-11-26 18:55:05.033456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:13.869 [2024-11-26 18:55:05.033475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 [2024-11-26 18:55:05.161378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:13.870 [2024-11-26 18:55:05.164091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:13.870 [2024-11-26 18:55:05.164195] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:13.870 [2024-11-26 18:55:05.164266] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:13.870 [2024-11-26 18:55:05.164292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:13.870 [2024-11-26 18:55:05.164308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:13.870 request:
00:08:13.870 {
00:08:13.870 "name": "raid_bdev1",
00:08:13.870 "raid_level": "raid1",
00:08:13.870 "base_bdevs": [
00:08:13.870 "malloc1",
00:08:13.870 "malloc2"
00:08:13.870 ],
00:08:13.870 "superblock": false,
00:08:13.870 "method": "bdev_raid_create",
00:08:13.870 "req_id": 1
00:08:13.870 }
00:08:13.870 Got JSON-RPC error response
00:08:13.870 response:
00:08:13.870 {
00:08:13.870 "code": -17,
00:08:13.870 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:13.870 }
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.870 [2024-11-26 18:55:05.225369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:13.870 [2024-11-26 18:55:05.225479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:13.870 [2024-11-26 18:55:05.225511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:13.870 [2024-11-26 18:55:05.225529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:13.870 [2024-11-26 18:55:05.228685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:13.870 [2024-11-26 18:55:05.228744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:13.870 [2024-11-26 18:55:05.228861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:13.870 [2024-11-26 18:55:05.228980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:13.870 pt1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.870 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.154 "name": "raid_bdev1",
00:08:14.154 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:14.154 "strip_size_kb": 0,
00:08:14.154 "state": "configuring",
00:08:14.154 "raid_level": "raid1",
00:08:14.154 "superblock": true,
00:08:14.154 "num_base_bdevs": 2,
00:08:14.154 "num_base_bdevs_discovered": 1,
00:08:14.154 "num_base_bdevs_operational": 2,
00:08:14.154 "base_bdevs_list": [
00:08:14.154 {
00:08:14.154 "name": "pt1",
00:08:14.154 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:14.154 "is_configured": true,
00:08:14.154 "data_offset": 2048,
00:08:14.154 "data_size": 63488
00:08:14.154 },
00:08:14.154 {
00:08:14.154 "name": null,
00:08:14.154 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:14.154 "is_configured": false,
00:08:14.154 "data_offset": 2048,
00:08:14.154 "data_size": 63488
00:08:14.154 }
00:08:14.154 ]
00:08:14.154 }'
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.154 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.414 [2024-11-26 18:55:05.733513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:14.414 [2024-11-26 18:55:05.733616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:14.414 [2024-11-26 18:55:05.733648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:14.414 [2024-11-26 18:55:05.733667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:14.414 [2024-11-26 18:55:05.734298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:14.414 [2024-11-26 18:55:05.734336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:14.414 [2024-11-26 18:55:05.734456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:14.414 [2024-11-26 18:55:05.734502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:14.414 [2024-11-26 18:55:05.734649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:14.414 [2024-11-26 18:55:05.734670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:14.414 [2024-11-26 18:55:05.735000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:14.414 [2024-11-26 18:55:05.735208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:14.414 [2024-11-26 18:55:05.735223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:14.414 [2024-11-26 18:55:05.735396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:14.414 pt2
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.414 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.672 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.672 "name": "raid_bdev1",
00:08:14.672 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:14.672 "strip_size_kb": 0,
00:08:14.672 "state": "online",
00:08:14.672 "raid_level": "raid1",
00:08:14.672 "superblock": true,
00:08:14.672 "num_base_bdevs": 2,
00:08:14.672 "num_base_bdevs_discovered": 2,
00:08:14.672 "num_base_bdevs_operational": 2,
00:08:14.672 "base_bdevs_list": [
00:08:14.672 {
00:08:14.672 "name": "pt1",
00:08:14.672 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:14.672 "is_configured": true,
00:08:14.672 "data_offset": 2048,
00:08:14.672 "data_size": 63488
00:08:14.672 },
00:08:14.672 {
00:08:14.672 "name": "pt2",
00:08:14.672 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:14.672 "is_configured": true,
00:08:14.672 "data_offset": 2048,
00:08:14.672 "data_size": 63488
00:08:14.672 }
00:08:14.672 ]
00:08:14.672 }'
00:08:14.672 18:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.672 18:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:14.939 [2024-11-26 18:55:06.254007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.939 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:14.939 "name": "raid_bdev1",
00:08:14.939 "aliases": [
00:08:14.939 "f1b0e7fa-6784-43f4-8182-0fe54b003909"
00:08:14.939 ],
00:08:14.939 "product_name": "Raid Volume",
00:08:14.939 "block_size": 512,
00:08:14.939 "num_blocks": 63488,
00:08:14.939 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:14.939 "assigned_rate_limits": {
00:08:14.939 "rw_ios_per_sec": 0,
00:08:14.939 "rw_mbytes_per_sec": 0,
00:08:14.939 "r_mbytes_per_sec": 0,
00:08:14.939 "w_mbytes_per_sec": 0
00:08:14.939 },
00:08:14.939 "claimed": false,
00:08:14.939 "zoned": false,
00:08:14.939 "supported_io_types": {
00:08:14.939 "read": true,
00:08:14.939 "write": true,
00:08:14.939 "unmap": false,
00:08:14.939 "flush": false,
00:08:14.939 "reset": true,
00:08:14.939 "nvme_admin": false,
00:08:14.939 "nvme_io": false,
00:08:14.939 "nvme_io_md": false,
00:08:14.939 "write_zeroes": true,
00:08:14.939 "zcopy": false,
00:08:14.939 "get_zone_info": false,
00:08:14.939 "zone_management": false,
00:08:14.939 "zone_append": false,
00:08:14.939 "compare": false,
00:08:14.939 "compare_and_write": false,
00:08:14.939 "abort": false,
00:08:14.939 "seek_hole": false,
00:08:14.939 "seek_data": false,
00:08:14.939 "copy": false,
00:08:14.939 "nvme_iov_md": false
00:08:14.939 },
00:08:14.939 "memory_domains": [
00:08:14.939 {
00:08:14.939 "dma_device_id": "system",
00:08:14.939 "dma_device_type": 1
00:08:14.939 },
00:08:14.939 {
00:08:14.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.939 "dma_device_type": 2
00:08:14.939 },
00:08:14.939 {
00:08:14.939 "dma_device_id": "system",
00:08:14.939 "dma_device_type": 1
00:08:14.939 },
00:08:14.939 {
00:08:14.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.939 "dma_device_type": 2
00:08:14.939 }
00:08:14.939 ],
00:08:14.939 "driver_specific": {
00:08:14.939 "raid": {
00:08:14.939 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909",
00:08:14.939 "strip_size_kb": 0,
00:08:14.939 "state": "online",
00:08:14.939 "raid_level": "raid1",
00:08:14.939 "superblock": true,
00:08:14.939 "num_base_bdevs": 2,
00:08:14.939 "num_base_bdevs_discovered": 2,
00:08:14.939 "num_base_bdevs_operational": 2,
00:08:14.939 "base_bdevs_list": [
00:08:14.939 {
00:08:14.939 "name": "pt1",
00:08:14.939 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:14.939 "is_configured": true,
00:08:14.939 "data_offset": 2048,
00:08:14.940 "data_size": 63488
00:08:14.940 },
00:08:14.940 {
00:08:14.940 "name": "pt2",
00:08:14.940 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:14.940 "is_configured": true,
00:08:14.940 "data_offset": 2048,
00:08:14.940 "data_size": 63488
00:08:14.940 }
00:08:14.940 ]
00:08:14.940 }
00:08:14.940 }
00:08:14.940 }'
00:08:14.940 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:15.197 pt2'
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:15.197 [2024-11-26 18:55:06.505966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f1b0e7fa-6784-43f4-8182-0fe54b003909 '!=' f1b0e7fa-6784-43f4-8182-0fe54b003909 ']'
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:15.197 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- #
set +x 00:08:15.455 [2024-11-26 18:55:06.561834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.455 "name": "raid_bdev1", 
00:08:15.455 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909", 00:08:15.455 "strip_size_kb": 0, 00:08:15.455 "state": "online", 00:08:15.455 "raid_level": "raid1", 00:08:15.455 "superblock": true, 00:08:15.455 "num_base_bdevs": 2, 00:08:15.455 "num_base_bdevs_discovered": 1, 00:08:15.455 "num_base_bdevs_operational": 1, 00:08:15.455 "base_bdevs_list": [ 00:08:15.455 { 00:08:15.455 "name": null, 00:08:15.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.455 "is_configured": false, 00:08:15.455 "data_offset": 0, 00:08:15.455 "data_size": 63488 00:08:15.455 }, 00:08:15.455 { 00:08:15.455 "name": "pt2", 00:08:15.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.455 "is_configured": true, 00:08:15.455 "data_offset": 2048, 00:08:15.455 "data_size": 63488 00:08:15.455 } 00:08:15.455 ] 00:08:15.455 }' 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.455 18:55:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 [2024-11-26 18:55:07.162006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.023 [2024-11-26 18:55:07.162048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.023 [2024-11-26 18:55:07.162160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.023 [2024-11-26 18:55:07.162230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.023 [2024-11-26 18:55:07.162249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:16.023 18:55:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 [2024-11-26 18:55:07.249948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.023 [2024-11-26 18:55:07.250046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.023 [2024-11-26 18:55:07.250072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:16.023 [2024-11-26 18:55:07.250090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.023 [2024-11-26 18:55:07.253255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.023 [2024-11-26 18:55:07.253333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.023 [2024-11-26 18:55:07.253440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.023 [2024-11-26 18:55:07.253504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.023 [2024-11-26 18:55:07.253633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:16.023 [2024-11-26 18:55:07.253662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:16.023 [2024-11-26 18:55:07.253965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:16.023 [2024-11-26 18:55:07.254170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:16.023 [2024-11-26 18:55:07.254196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:16.023 
[2024-11-26 18:55:07.254421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.023 pt2 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.023 "name": 
"raid_bdev1", 00:08:16.023 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909", 00:08:16.023 "strip_size_kb": 0, 00:08:16.023 "state": "online", 00:08:16.023 "raid_level": "raid1", 00:08:16.023 "superblock": true, 00:08:16.023 "num_base_bdevs": 2, 00:08:16.023 "num_base_bdevs_discovered": 1, 00:08:16.023 "num_base_bdevs_operational": 1, 00:08:16.023 "base_bdevs_list": [ 00:08:16.023 { 00:08:16.023 "name": null, 00:08:16.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.023 "is_configured": false, 00:08:16.023 "data_offset": 2048, 00:08:16.023 "data_size": 63488 00:08:16.023 }, 00:08:16.023 { 00:08:16.023 "name": "pt2", 00:08:16.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.023 "is_configured": true, 00:08:16.023 "data_offset": 2048, 00:08:16.023 "data_size": 63488 00:08:16.023 } 00:08:16.023 ] 00:08:16.023 }' 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.023 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.588 [2024-11-26 18:55:07.770551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.588 [2024-11-26 18:55:07.770601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.588 [2024-11-26 18:55:07.770725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.588 [2024-11-26 18:55:07.770832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.588 [2024-11-26 18:55:07.770858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.588 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.588 [2024-11-26 18:55:07.818556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:16.588 [2024-11-26 18:55:07.818633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.588 [2024-11-26 18:55:07.818675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:16.588 [2024-11-26 18:55:07.818697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.588 [2024-11-26 18:55:07.822143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.588 [2024-11-26 18:55:07.822192] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:16.588 [2024-11-26 18:55:07.822320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:16.588 [2024-11-26 18:55:07.822392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:16.588 [2024-11-26 18:55:07.822607] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:16.588 [2024-11-26 18:55:07.822637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.588 [2024-11-26 18:55:07.822666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:16.588 [2024-11-26 18:55:07.822746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.588 [2024-11-26 18:55:07.822946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:16.588 [2024-11-26 18:55:07.822975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:16.588 pt1 00:08:16.589 [2024-11-26 18:55:07.823366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:16.589 [2024-11-26 18:55:07.823597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:16.589 [2024-11-26 18:55:07.823624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:16.589 [2024-11-26 18:55:07.823847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.589 "name": "raid_bdev1", 00:08:16.589 "uuid": "f1b0e7fa-6784-43f4-8182-0fe54b003909", 00:08:16.589 "strip_size_kb": 0, 00:08:16.589 "state": "online", 00:08:16.589 "raid_level": "raid1", 00:08:16.589 "superblock": true, 00:08:16.589 "num_base_bdevs": 2, 00:08:16.589 "num_base_bdevs_discovered": 1, 00:08:16.589 "num_base_bdevs_operational": 1, 
00:08:16.589 "base_bdevs_list": [ 00:08:16.589 { 00:08:16.589 "name": null, 00:08:16.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.589 "is_configured": false, 00:08:16.589 "data_offset": 2048, 00:08:16.589 "data_size": 63488 00:08:16.589 }, 00:08:16.589 { 00:08:16.589 "name": "pt2", 00:08:16.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.589 "is_configured": true, 00:08:16.589 "data_offset": 2048, 00:08:16.589 "data_size": 63488 00:08:16.589 } 00:08:16.589 ] 00:08:16.589 }' 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.589 18:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.154 [2024-11-26 18:55:08.387241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f1b0e7fa-6784-43f4-8182-0fe54b003909 '!=' f1b0e7fa-6784-43f4-8182-0fe54b003909 ']' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63196 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63196 ']' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63196 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63196 00:08:17.154 killing process with pid 63196 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63196' 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63196 00:08:17.154 [2024-11-26 18:55:08.460117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.154 18:55:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63196 00:08:17.154 [2024-11-26 18:55:08.460270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.154 [2024-11-26 18:55:08.460379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.154 [2024-11-26 18:55:08.460429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:17.413 [2024-11-26 
18:55:08.658677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.788 18:55:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:18.788 00:08:18.788 real 0m6.985s 00:08:18.788 user 0m11.116s 00:08:18.788 sys 0m0.943s 00:08:18.788 18:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.788 18:55:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.788 ************************************ 00:08:18.788 END TEST raid_superblock_test 00:08:18.788 ************************************ 00:08:18.788 18:55:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:18.788 18:55:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.788 18:55:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.788 18:55:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.788 ************************************ 00:08:18.788 START TEST raid_read_error_test 00:08:18.788 ************************************ 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.788 18:55:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.30wJFDGsfl 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63534 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63534 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63534 ']' 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.788 18:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.788 [2024-11-26 18:55:09.898929] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:08:18.788 [2024-11-26 18:55:09.899117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63534 ] 00:08:18.788 [2024-11-26 18:55:10.079629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.047 [2024-11-26 18:55:10.212275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.306 [2024-11-26 18:55:10.420549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.306 [2024-11-26 18:55:10.420624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.565 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.565 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:19.565 18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.565 
18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:19.565 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.565 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.825 BaseBdev1_malloc 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.825 true 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.825 [2024-11-26 18:55:10.952789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:19.825 [2024-11-26 18:55:10.952922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.825 [2024-11-26 18:55:10.952967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:19.825 [2024-11-26 18:55:10.952986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.825 [2024-11-26 18:55:10.956173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.825 [2024-11-26 18:55:10.956237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:08:19.825 BaseBdev1 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.825 BaseBdev2_malloc 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:19.825 18:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.825 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.825 true 00:08:19.825 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.825 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.825 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.825 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.825 [2024-11-26 18:55:11.019028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.825 [2024-11-26 18:55:11.019129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.825 [2024-11-26 18:55:11.019180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:19.825 [2024-11-26 18:55:11.019198] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:08:19.825 [2024-11-26 18:55:11.022237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.826 [2024-11-26 18:55:11.022301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.826 BaseBdev2 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.826 [2024-11-26 18:55:11.031142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.826 [2024-11-26 18:55:11.033769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.826 [2024-11-26 18:55:11.034111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.826 [2024-11-26 18:55:11.034135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:19.826 [2024-11-26 18:55:11.034484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:19.826 [2024-11-26 18:55:11.034730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.826 [2024-11-26 18:55:11.034756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:19.826 [2024-11-26 18:55:11.035046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.826 "name": "raid_bdev1", 00:08:19.826 "uuid": "c3d9e9b9-faa7-4df1-9319-9c0d047dd86f", 00:08:19.826 "strip_size_kb": 0, 00:08:19.826 "state": "online", 00:08:19.826 "raid_level": "raid1", 00:08:19.826 "superblock": true, 00:08:19.826 "num_base_bdevs": 2, 00:08:19.826 "num_base_bdevs_discovered": 2, 00:08:19.826 
"num_base_bdevs_operational": 2, 00:08:19.826 "base_bdevs_list": [ 00:08:19.826 { 00:08:19.826 "name": "BaseBdev1", 00:08:19.826 "uuid": "97bb8a00-319a-5763-a255-8b7c9ad3b4fa", 00:08:19.826 "is_configured": true, 00:08:19.826 "data_offset": 2048, 00:08:19.826 "data_size": 63488 00:08:19.826 }, 00:08:19.826 { 00:08:19.826 "name": "BaseBdev2", 00:08:19.826 "uuid": "2606d5f4-60d1-5892-a2a3-6afe1788b9f6", 00:08:19.826 "is_configured": true, 00:08:19.826 "data_offset": 2048, 00:08:19.826 "data_size": 63488 00:08:19.826 } 00:08:19.826 ] 00:08:19.826 }' 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.826 18:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.394 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:20.394 18:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:20.394 [2024-11-26 18:55:11.684753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.329 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.329 "name": "raid_bdev1", 00:08:21.329 "uuid": "c3d9e9b9-faa7-4df1-9319-9c0d047dd86f", 00:08:21.329 "strip_size_kb": 0, 00:08:21.329 "state": "online", 00:08:21.329 "raid_level": "raid1", 00:08:21.329 
"superblock": true, 00:08:21.329 "num_base_bdevs": 2, 00:08:21.329 "num_base_bdevs_discovered": 2, 00:08:21.329 "num_base_bdevs_operational": 2, 00:08:21.329 "base_bdevs_list": [ 00:08:21.329 { 00:08:21.329 "name": "BaseBdev1", 00:08:21.329 "uuid": "97bb8a00-319a-5763-a255-8b7c9ad3b4fa", 00:08:21.329 "is_configured": true, 00:08:21.329 "data_offset": 2048, 00:08:21.329 "data_size": 63488 00:08:21.330 }, 00:08:21.330 { 00:08:21.330 "name": "BaseBdev2", 00:08:21.330 "uuid": "2606d5f4-60d1-5892-a2a3-6afe1788b9f6", 00:08:21.330 "is_configured": true, 00:08:21.330 "data_offset": 2048, 00:08:21.330 "data_size": 63488 00:08:21.330 } 00:08:21.330 ] 00:08:21.330 }' 00:08:21.330 18:55:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.330 18:55:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.898 [2024-11-26 18:55:13.123869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.898 [2024-11-26 18:55:13.123942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.898 { 00:08:21.898 "results": [ 00:08:21.898 { 00:08:21.898 "job": "raid_bdev1", 00:08:21.898 "core_mask": "0x1", 00:08:21.898 "workload": "randrw", 00:08:21.898 "percentage": 50, 00:08:21.898 "status": "finished", 00:08:21.898 "queue_depth": 1, 00:08:21.898 "io_size": 131072, 00:08:21.898 "runtime": 1.436465, 00:08:21.898 "iops": 11561.019586276032, 00:08:21.898 "mibps": 1445.127448284504, 00:08:21.898 "io_failed": 0, 00:08:21.898 "io_timeout": 0, 00:08:21.898 "avg_latency_us": 82.32465564904174, 00:08:21.898 "min_latency_us": 40.261818181818185, 
00:08:21.898 "max_latency_us": 1906.5018181818182 00:08:21.898 } 00:08:21.898 ], 00:08:21.898 "core_count": 1 00:08:21.898 } 00:08:21.898 [2024-11-26 18:55:13.127348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.898 [2024-11-26 18:55:13.127410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.898 [2024-11-26 18:55:13.127520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.898 [2024-11-26 18:55:13.127541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63534 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63534 ']' 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63534 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63534 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.898 killing process with pid 63534 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63534' 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63534 00:08:21.898 [2024-11-26 18:55:13.165019] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:08:21.898 18:55:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63534 00:08:22.157 [2024-11-26 18:55:13.291710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.30wJFDGsfl 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:23.091 00:08:23.091 real 0m4.655s 00:08:23.091 user 0m5.816s 00:08:23.091 sys 0m0.578s 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.091 ************************************ 00:08:23.091 END TEST raid_read_error_test 00:08:23.091 ************************************ 00:08:23.091 18:55:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.351 18:55:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:23.351 18:55:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:23.351 18:55:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.351 18:55:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.351 ************************************ 00:08:23.351 START TEST raid_write_error_test 00:08:23.351 
************************************ 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tvSXTZJUDA 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63681 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63681 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63681 ']' 00:08:23.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.351 18:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.351 [2024-11-26 18:55:14.603149] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:08:23.351 [2024-11-26 18:55:14.603304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63681 ] 00:08:23.610 [2024-11-26 18:55:14.778180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.610 [2024-11-26 18:55:14.917883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.868 [2024-11-26 18:55:15.151605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.868 [2024-11-26 18:55:15.151696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.433 BaseBdev1_malloc 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.433 true 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.433 [2024-11-26 18:55:15.746589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:24.433 [2024-11-26 18:55:15.746681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.433 [2024-11-26 18:55:15.746723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:24.433 [2024-11-26 18:55:15.746751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.433 [2024-11-26 18:55:15.750597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.433 [2024-11-26 18:55:15.750838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:24.433 BaseBdev1 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.433 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 BaseBdev2_malloc 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:24.692 18:55:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 true 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 [2024-11-26 18:55:15.810426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:24.692 [2024-11-26 18:55:15.810501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.692 [2024-11-26 18:55:15.810527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:24.692 [2024-11-26 18:55:15.810544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.692 [2024-11-26 18:55:15.813383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.692 [2024-11-26 18:55:15.813565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:24.692 BaseBdev2 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 [2024-11-26 18:55:15.818518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:24.692 [2024-11-26 18:55:15.821077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.692 [2024-11-26 18:55:15.821340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:24.692 [2024-11-26 18:55:15.821365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:24.692 [2024-11-26 18:55:15.821689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:24.692 [2024-11-26 18:55:15.821938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:24.692 [2024-11-26 18:55:15.821956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:24.692 [2024-11-26 18:55:15.822147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.692 "name": "raid_bdev1", 00:08:24.692 "uuid": "058520cf-a40a-432a-a7df-e08657f66869", 00:08:24.692 "strip_size_kb": 0, 00:08:24.692 "state": "online", 00:08:24.692 "raid_level": "raid1", 00:08:24.692 "superblock": true, 00:08:24.692 "num_base_bdevs": 2, 00:08:24.692 "num_base_bdevs_discovered": 2, 00:08:24.692 "num_base_bdevs_operational": 2, 00:08:24.692 "base_bdevs_list": [ 00:08:24.692 { 00:08:24.692 "name": "BaseBdev1", 00:08:24.692 "uuid": "96685b5a-cd6a-5b65-bfbc-a39450edbaa3", 00:08:24.692 "is_configured": true, 00:08:24.692 "data_offset": 2048, 00:08:24.692 "data_size": 63488 00:08:24.692 }, 00:08:24.692 { 00:08:24.692 "name": "BaseBdev2", 00:08:24.692 "uuid": "37e2f09f-4190-5a79-897f-9c8bd49ae920", 00:08:24.692 "is_configured": true, 00:08:24.692 "data_offset": 2048, 00:08:24.692 "data_size": 63488 00:08:24.692 } 00:08:24.692 ] 00:08:24.692 }' 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.692 18:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.260 18:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:25.260 18:55:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:25.260 [2024-11-26 18:55:16.480176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.259 [2024-11-26 18:55:17.357825] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:26.259 [2024-11-26 18:55:17.358152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.259 [2024-11-26 18:55:17.358409] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.259 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.260 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.260 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.260 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.260 "name": "raid_bdev1", 00:08:26.260 "uuid": "058520cf-a40a-432a-a7df-e08657f66869", 00:08:26.260 "strip_size_kb": 0, 00:08:26.260 "state": "online", 00:08:26.260 "raid_level": "raid1", 00:08:26.260 "superblock": true, 00:08:26.260 "num_base_bdevs": 2, 00:08:26.260 "num_base_bdevs_discovered": 1, 00:08:26.260 "num_base_bdevs_operational": 1, 00:08:26.260 "base_bdevs_list": [ 00:08:26.260 { 00:08:26.260 "name": null, 00:08:26.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.260 "is_configured": false, 00:08:26.260 "data_offset": 0, 00:08:26.260 "data_size": 63488 00:08:26.260 }, 00:08:26.260 { 00:08:26.260 "name": 
"BaseBdev2", 00:08:26.260 "uuid": "37e2f09f-4190-5a79-897f-9c8bd49ae920", 00:08:26.260 "is_configured": true, 00:08:26.260 "data_offset": 2048, 00:08:26.260 "data_size": 63488 00:08:26.260 } 00:08:26.260 ] 00:08:26.260 }' 00:08:26.260 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.260 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.828 [2024-11-26 18:55:17.909724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.828 [2024-11-26 18:55:17.909759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.828 [2024-11-26 18:55:17.913347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.828 [2024-11-26 18:55:17.913410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.828 [2024-11-26 18:55:17.913490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.828 [2024-11-26 18:55:17.913505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:26.828 { 00:08:26.828 "results": [ 00:08:26.828 { 00:08:26.828 "job": "raid_bdev1", 00:08:26.828 "core_mask": "0x1", 00:08:26.828 "workload": "randrw", 00:08:26.828 "percentage": 50, 00:08:26.828 "status": "finished", 00:08:26.828 "queue_depth": 1, 00:08:26.828 "io_size": 131072, 00:08:26.828 "runtime": 1.426799, 00:08:26.828 "iops": 13558.321809869505, 00:08:26.828 "mibps": 1694.7902262336881, 00:08:26.828 "io_failed": 0, 00:08:26.828 "io_timeout": 0, 
00:08:26.828 "avg_latency_us": 69.64413261589793, 00:08:26.828 "min_latency_us": 37.93454545454546, 00:08:26.828 "max_latency_us": 1817.1345454545456 00:08:26.828 } 00:08:26.828 ], 00:08:26.828 "core_count": 1 00:08:26.828 } 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63681 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63681 ']' 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63681 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63681 00:08:26.828 killing process with pid 63681 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63681' 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63681 00:08:26.828 [2024-11-26 18:55:17.953570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.828 18:55:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63681 00:08:26.828 [2024-11-26 18:55:18.073188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tvSXTZJUDA 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.205 ************************************ 00:08:28.205 END TEST raid_write_error_test 00:08:28.205 ************************************ 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:28.205 00:08:28.205 real 0m4.707s 00:08:28.205 user 0m5.973s 00:08:28.205 sys 0m0.584s 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.205 18:55:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 18:55:19 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:28.206 18:55:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:28.206 18:55:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:28.206 18:55:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:28.206 18:55:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.206 18:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 ************************************ 00:08:28.206 START TEST raid_state_function_test 00:08:28.206 ************************************ 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.206 
18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63830 00:08:28.206 Process raid pid: 63830 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63830' 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63830 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63830 ']' 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.206 18:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 [2024-11-26 18:55:19.380050] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:08:28.206 [2024-11-26 18:55:19.380240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.465 [2024-11-26 18:55:19.569886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.465 [2024-11-26 18:55:19.705150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.725 [2024-11-26 18:55:19.918534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.725 [2024-11-26 18:55:19.918591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.347 [2024-11-26 18:55:20.393078] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.347 [2024-11-26 18:55:20.393173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.347 [2024-11-26 18:55:20.393191] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.347 [2024-11-26 18:55:20.393208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.347 [2024-11-26 18:55:20.393218] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.347 [2024-11-26 18:55:20.393232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.347 18:55:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.347 "name": "Existed_Raid", 00:08:29.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.347 "strip_size_kb": 64, 00:08:29.347 "state": "configuring", 00:08:29.347 "raid_level": "raid0", 00:08:29.347 "superblock": false, 00:08:29.347 "num_base_bdevs": 3, 00:08:29.347 "num_base_bdevs_discovered": 0, 00:08:29.347 "num_base_bdevs_operational": 3, 00:08:29.347 "base_bdevs_list": [ 00:08:29.347 { 00:08:29.347 "name": "BaseBdev1", 00:08:29.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.347 "is_configured": false, 00:08:29.347 "data_offset": 0, 00:08:29.347 "data_size": 0 00:08:29.347 }, 00:08:29.347 { 00:08:29.347 "name": "BaseBdev2", 00:08:29.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.347 "is_configured": false, 00:08:29.347 "data_offset": 0, 00:08:29.347 "data_size": 0 00:08:29.347 }, 00:08:29.347 { 00:08:29.347 "name": "BaseBdev3", 00:08:29.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.347 "is_configured": false, 00:08:29.347 "data_offset": 0, 00:08:29.347 "data_size": 0 00:08:29.347 } 00:08:29.347 ] 00:08:29.347 }' 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.347 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.607 18:55:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.607 [2024-11-26 18:55:20.921113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.607 [2024-11-26 18:55:20.921166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.607 [2024-11-26 18:55:20.933108] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.607 [2024-11-26 18:55:20.933164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.607 [2024-11-26 18:55:20.933180] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.607 [2024-11-26 18:55:20.933196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.607 [2024-11-26 18:55:20.933206] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.607 [2024-11-26 18:55:20.933221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:29.607 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 [2024-11-26 18:55:20.978448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.866 BaseBdev1 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.866 18:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 [ 00:08:29.866 { 00:08:29.866 "name": "BaseBdev1", 00:08:29.866 "aliases": [ 00:08:29.866 "c30695e5-96da-497b-b71f-b6d12cc997af" 00:08:29.866 ], 00:08:29.866 
"product_name": "Malloc disk", 00:08:29.866 "block_size": 512, 00:08:29.866 "num_blocks": 65536, 00:08:29.866 "uuid": "c30695e5-96da-497b-b71f-b6d12cc997af", 00:08:29.866 "assigned_rate_limits": { 00:08:29.866 "rw_ios_per_sec": 0, 00:08:29.866 "rw_mbytes_per_sec": 0, 00:08:29.866 "r_mbytes_per_sec": 0, 00:08:29.866 "w_mbytes_per_sec": 0 00:08:29.866 }, 00:08:29.866 "claimed": true, 00:08:29.866 "claim_type": "exclusive_write", 00:08:29.866 "zoned": false, 00:08:29.866 "supported_io_types": { 00:08:29.866 "read": true, 00:08:29.866 "write": true, 00:08:29.866 "unmap": true, 00:08:29.866 "flush": true, 00:08:29.866 "reset": true, 00:08:29.866 "nvme_admin": false, 00:08:29.866 "nvme_io": false, 00:08:29.866 "nvme_io_md": false, 00:08:29.866 "write_zeroes": true, 00:08:29.866 "zcopy": true, 00:08:29.866 "get_zone_info": false, 00:08:29.866 "zone_management": false, 00:08:29.866 "zone_append": false, 00:08:29.866 "compare": false, 00:08:29.866 "compare_and_write": false, 00:08:29.866 "abort": true, 00:08:29.866 "seek_hole": false, 00:08:29.866 "seek_data": false, 00:08:29.866 "copy": true, 00:08:29.866 "nvme_iov_md": false 00:08:29.866 }, 00:08:29.866 "memory_domains": [ 00:08:29.866 { 00:08:29.866 "dma_device_id": "system", 00:08:29.866 "dma_device_type": 1 00:08:29.866 }, 00:08:29.866 { 00:08:29.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.866 "dma_device_type": 2 00:08:29.866 } 00:08:29.866 ], 00:08:29.866 "driver_specific": {} 00:08:29.866 } 00:08:29.866 ] 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.866 18:55:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.866 "name": "Existed_Raid", 00:08:29.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.866 "strip_size_kb": 64, 00:08:29.866 "state": "configuring", 00:08:29.866 "raid_level": "raid0", 00:08:29.866 "superblock": false, 00:08:29.866 "num_base_bdevs": 3, 00:08:29.866 "num_base_bdevs_discovered": 1, 00:08:29.866 "num_base_bdevs_operational": 3, 00:08:29.866 "base_bdevs_list": [ 00:08:29.866 { 00:08:29.866 "name": "BaseBdev1", 
00:08:29.866 "uuid": "c30695e5-96da-497b-b71f-b6d12cc997af", 00:08:29.866 "is_configured": true, 00:08:29.866 "data_offset": 0, 00:08:29.866 "data_size": 65536 00:08:29.866 }, 00:08:29.866 { 00:08:29.866 "name": "BaseBdev2", 00:08:29.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.866 "is_configured": false, 00:08:29.866 "data_offset": 0, 00:08:29.866 "data_size": 0 00:08:29.866 }, 00:08:29.866 { 00:08:29.866 "name": "BaseBdev3", 00:08:29.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.866 "is_configured": false, 00:08:29.866 "data_offset": 0, 00:08:29.866 "data_size": 0 00:08:29.866 } 00:08:29.866 ] 00:08:29.866 }' 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.866 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.441 [2024-11-26 18:55:21.510656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.441 [2024-11-26 18:55:21.510736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.441 [2024-11-26 
18:55:21.518709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.441 [2024-11-26 18:55:21.521297] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.441 [2024-11-26 18:55:21.521359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.441 [2024-11-26 18:55:21.521377] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.441 [2024-11-26 18:55:21.521392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.441 "name": "Existed_Raid", 00:08:30.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.441 "strip_size_kb": 64, 00:08:30.441 "state": "configuring", 00:08:30.441 "raid_level": "raid0", 00:08:30.441 "superblock": false, 00:08:30.441 "num_base_bdevs": 3, 00:08:30.441 "num_base_bdevs_discovered": 1, 00:08:30.441 "num_base_bdevs_operational": 3, 00:08:30.441 "base_bdevs_list": [ 00:08:30.441 { 00:08:30.441 "name": "BaseBdev1", 00:08:30.441 "uuid": "c30695e5-96da-497b-b71f-b6d12cc997af", 00:08:30.441 "is_configured": true, 00:08:30.441 "data_offset": 0, 00:08:30.441 "data_size": 65536 00:08:30.441 }, 00:08:30.441 { 00:08:30.441 "name": "BaseBdev2", 00:08:30.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.441 "is_configured": false, 00:08:30.441 "data_offset": 0, 00:08:30.441 "data_size": 0 00:08:30.441 }, 00:08:30.441 { 00:08:30.441 "name": "BaseBdev3", 00:08:30.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.441 "is_configured": false, 00:08:30.441 "data_offset": 0, 00:08:30.441 "data_size": 0 00:08:30.441 } 00:08:30.441 ] 00:08:30.441 }' 00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:30.441 18:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.700 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.700 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.700 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.959 [2024-11-26 18:55:22.082683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.959 BaseBdev2 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.959 18:55:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.959 [
00:08:30.959 {
00:08:30.959 "name": "BaseBdev2",
00:08:30.959 "aliases": [
00:08:30.959 "c6195296-bedf-41ee-8bdf-2c49d149fecc"
00:08:30.959 ],
00:08:30.959 "product_name": "Malloc disk",
00:08:30.959 "block_size": 512,
00:08:30.959 "num_blocks": 65536,
00:08:30.959 "uuid": "c6195296-bedf-41ee-8bdf-2c49d149fecc",
00:08:30.959 "assigned_rate_limits": {
00:08:30.959 "rw_ios_per_sec": 0,
00:08:30.959 "rw_mbytes_per_sec": 0,
00:08:30.959 "r_mbytes_per_sec": 0,
00:08:30.959 "w_mbytes_per_sec": 0
00:08:30.959 },
00:08:30.959 "claimed": true,
00:08:30.959 "claim_type": "exclusive_write",
00:08:30.959 "zoned": false,
00:08:30.959 "supported_io_types": {
00:08:30.959 "read": true,
00:08:30.959 "write": true,
00:08:30.959 "unmap": true,
00:08:30.959 "flush": true,
00:08:30.959 "reset": true,
00:08:30.959 "nvme_admin": false,
00:08:30.959 "nvme_io": false,
00:08:30.959 "nvme_io_md": false,
00:08:30.959 "write_zeroes": true,
00:08:30.959 "zcopy": true,
00:08:30.959 "get_zone_info": false,
00:08:30.959 "zone_management": false,
00:08:30.959 "zone_append": false,
00:08:30.959 "compare": false,
00:08:30.959 "compare_and_write": false,
00:08:30.959 "abort": true,
00:08:30.959 "seek_hole": false,
00:08:30.959 "seek_data": false,
00:08:30.959 "copy": true,
00:08:30.959 "nvme_iov_md": false
00:08:30.959 },
00:08:30.959 "memory_domains": [
00:08:30.959 {
00:08:30.959 "dma_device_id": "system",
00:08:30.959 "dma_device_type": 1
00:08:30.959 },
00:08:30.959 {
00:08:30.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.959 "dma_device_type": 2
00:08:30.959 }
00:08:30.959 ],
00:08:30.959 "driver_specific": {}
00:08:30.959 }
00:08:30.959 ]
00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:30.959 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:30.960 "name": "Existed_Raid",
00:08:30.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.960 "strip_size_kb": 64,
00:08:30.960 "state": "configuring",
00:08:30.960 "raid_level": "raid0",
00:08:30.960 "superblock": false,
00:08:30.960 "num_base_bdevs": 3,
00:08:30.960 "num_base_bdevs_discovered": 2,
00:08:30.960 "num_base_bdevs_operational": 3,
00:08:30.960 "base_bdevs_list": [
00:08:30.960 {
00:08:30.960 "name": "BaseBdev1",
00:08:30.960 "uuid": "c30695e5-96da-497b-b71f-b6d12cc997af",
00:08:30.960 "is_configured": true,
00:08:30.960 "data_offset": 0,
00:08:30.960 "data_size": 65536
00:08:30.960 },
00:08:30.960 {
00:08:30.960 "name": "BaseBdev2",
00:08:30.960 "uuid": "c6195296-bedf-41ee-8bdf-2c49d149fecc",
00:08:30.960 "is_configured": true,
00:08:30.960 "data_offset": 0,
00:08:30.960 "data_size": 65536
00:08:30.960 },
00:08:30.960 {
00:08:30.960 "name": "BaseBdev3",
00:08:30.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.960 "is_configured": false,
00:08:30.960 "data_offset": 0,
00:08:30.960 "data_size": 0
00:08:30.960 }
00:08:30.960 ]
00:08:30.960 }'
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:30.960 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.526 [2024-11-26 18:55:22.731140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:31.526 [2024-11-26 18:55:22.731229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:31.526 [2024-11-26 18:55:22.731252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:31.526 [2024-11-26 18:55:22.731601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:31.526 [2024-11-26 18:55:22.731824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:31.526 [2024-11-26 18:55:22.731844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:31.526 [2024-11-26 18:55:22.732190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:31.526 BaseBdev3
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.526 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.526 [
00:08:31.526 {
00:08:31.526 "name": "BaseBdev3",
00:08:31.526 "aliases": [
00:08:31.526 "b2c9418d-81e7-4855-976f-a8e9daff7d51"
00:08:31.526 ],
00:08:31.526 "product_name": "Malloc disk",
00:08:31.526 "block_size": 512,
00:08:31.526 "num_blocks": 65536,
00:08:31.526 "uuid": "b2c9418d-81e7-4855-976f-a8e9daff7d51",
00:08:31.526 "assigned_rate_limits": {
00:08:31.526 "rw_ios_per_sec": 0,
00:08:31.526 "rw_mbytes_per_sec": 0,
00:08:31.526 "r_mbytes_per_sec": 0,
00:08:31.526 "w_mbytes_per_sec": 0
00:08:31.526 },
00:08:31.526 "claimed": true,
00:08:31.526 "claim_type": "exclusive_write",
00:08:31.526 "zoned": false,
00:08:31.526 "supported_io_types": {
00:08:31.526 "read": true,
00:08:31.526 "write": true,
00:08:31.526 "unmap": true,
00:08:31.526 "flush": true,
00:08:31.526 "reset": true,
00:08:31.526 "nvme_admin": false,
00:08:31.526 "nvme_io": false,
00:08:31.526 "nvme_io_md": false,
00:08:31.526 "write_zeroes": true,
00:08:31.526 "zcopy": true,
00:08:31.526 "get_zone_info": false,
00:08:31.526 "zone_management": false,
00:08:31.526 "zone_append": false,
00:08:31.526 "compare": false,
00:08:31.526 "compare_and_write": false,
00:08:31.526 "abort": true,
00:08:31.526 "seek_hole": false,
00:08:31.526 "seek_data": false,
00:08:31.526 "copy": true,
00:08:31.526 "nvme_iov_md": false
00:08:31.526 },
00:08:31.526 "memory_domains": [
00:08:31.526 {
00:08:31.526 "dma_device_id": "system",
00:08:31.526 "dma_device_type": 1
00:08:31.527 },
00:08:31.527 {
00:08:31.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.527 "dma_device_type": 2
00:08:31.527 }
00:08:31.527 ],
00:08:31.527 "driver_specific": {}
00:08:31.527 }
00:08:31.527 ]
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.527 "name": "Existed_Raid",
00:08:31.527 "uuid": "add745a3-b7df-4b6b-b1b9-99e4dc68b529",
00:08:31.527 "strip_size_kb": 64,
00:08:31.527 "state": "online",
00:08:31.527 "raid_level": "raid0",
00:08:31.527 "superblock": false,
00:08:31.527 "num_base_bdevs": 3,
00:08:31.527 "num_base_bdevs_discovered": 3,
00:08:31.527 "num_base_bdevs_operational": 3,
00:08:31.527 "base_bdevs_list": [
00:08:31.527 {
00:08:31.527 "name": "BaseBdev1",
00:08:31.527 "uuid": "c30695e5-96da-497b-b71f-b6d12cc997af",
00:08:31.527 "is_configured": true,
00:08:31.527 "data_offset": 0,
00:08:31.527 "data_size": 65536
00:08:31.527 },
00:08:31.527 {
00:08:31.527 "name": "BaseBdev2",
00:08:31.527 "uuid": "c6195296-bedf-41ee-8bdf-2c49d149fecc",
00:08:31.527 "is_configured": true,
00:08:31.527 "data_offset": 0,
00:08:31.527 "data_size": 65536
00:08:31.527 },
00:08:31.527 {
00:08:31.527 "name": "BaseBdev3",
00:08:31.527 "uuid": "b2c9418d-81e7-4855-976f-a8e9daff7d51",
00:08:31.527 "is_configured": true,
00:08:31.527 "data_offset": 0,
00:08:31.527 "data_size": 65536
00:08:31.527 }
00:08:31.527 ]
00:08:31.527 }'
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.527 18:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.093 [2024-11-26 18:55:23.351956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:32.093 "name": "Existed_Raid",
00:08:32.093 "aliases": [
00:08:32.093 "add745a3-b7df-4b6b-b1b9-99e4dc68b529"
00:08:32.093 ],
00:08:32.093 "product_name": "Raid Volume",
00:08:32.093 "block_size": 512,
00:08:32.093 "num_blocks": 196608,
00:08:32.093 "uuid": "add745a3-b7df-4b6b-b1b9-99e4dc68b529",
00:08:32.093 "assigned_rate_limits": {
00:08:32.093 "rw_ios_per_sec": 0,
00:08:32.093 "rw_mbytes_per_sec": 0,
00:08:32.093 "r_mbytes_per_sec": 0,
00:08:32.093 "w_mbytes_per_sec": 0
00:08:32.093 },
00:08:32.093 "claimed": false,
00:08:32.093 "zoned": false,
00:08:32.093 "supported_io_types": {
00:08:32.093 "read": true,
00:08:32.093 "write": true,
00:08:32.093 "unmap": true,
00:08:32.093 "flush": true,
00:08:32.093 "reset": true,
00:08:32.093 "nvme_admin": false,
00:08:32.093 "nvme_io": false,
00:08:32.093 "nvme_io_md": false,
00:08:32.093 "write_zeroes": true,
00:08:32.093 "zcopy": false,
00:08:32.093 "get_zone_info": false,
00:08:32.093 "zone_management": false,
00:08:32.093 "zone_append": false,
00:08:32.093 "compare": false,
00:08:32.093 "compare_and_write": false,
00:08:32.093 "abort": false,
00:08:32.093 "seek_hole": false,
00:08:32.093 "seek_data": false,
00:08:32.093 "copy": false,
00:08:32.093 "nvme_iov_md": false
00:08:32.093 },
00:08:32.093 "memory_domains": [
00:08:32.093 {
00:08:32.093 "dma_device_id": "system",
00:08:32.093 "dma_device_type": 1
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:32.093 "dma_device_type": 2
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "dma_device_id": "system",
00:08:32.093 "dma_device_type": 1
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:32.093 "dma_device_type": 2
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "dma_device_id": "system",
00:08:32.093 "dma_device_type": 1
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:32.093 "dma_device_type": 2
00:08:32.093 }
00:08:32.093 ],
00:08:32.093 "driver_specific": {
00:08:32.093 "raid": {
00:08:32.093 "uuid": "add745a3-b7df-4b6b-b1b9-99e4dc68b529",
00:08:32.093 "strip_size_kb": 64,
00:08:32.093 "state": "online",
00:08:32.093 "raid_level": "raid0",
00:08:32.093 "superblock": false,
00:08:32.093 "num_base_bdevs": 3,
00:08:32.093 "num_base_bdevs_discovered": 3,
00:08:32.093 "num_base_bdevs_operational": 3,
00:08:32.093 "base_bdevs_list": [
00:08:32.093 {
00:08:32.093 "name": "BaseBdev1",
00:08:32.093 "uuid": "c30695e5-96da-497b-b71f-b6d12cc997af",
00:08:32.093 "is_configured": true,
00:08:32.093 "data_offset": 0,
00:08:32.093 "data_size": 65536
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "name": "BaseBdev2",
00:08:32.093 "uuid": "c6195296-bedf-41ee-8bdf-2c49d149fecc",
00:08:32.093 "is_configured": true,
00:08:32.093 "data_offset": 0,
00:08:32.093 "data_size": 65536
00:08:32.093 },
00:08:32.093 {
00:08:32.093 "name": "BaseBdev3",
00:08:32.093 "uuid": "b2c9418d-81e7-4855-976f-a8e9daff7d51",
00:08:32.093 "is_configured": true,
00:08:32.093 "data_offset": 0,
00:08:32.093 "data_size": 65536
00:08:32.093 }
00:08:32.093 ]
00:08:32.093 }
00:08:32.093 }
00:08:32.093 }'
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:32.093 BaseBdev2
00:08:32.093 BaseBdev3'
00:08:32.093 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.351 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.351 [2024-11-26 18:55:23.695679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:32.351 [2024-11-26 18:55:23.695729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:32.351 [2024-11-26 18:55:23.695826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.609 "name": "Existed_Raid",
00:08:32.609 "uuid": "add745a3-b7df-4b6b-b1b9-99e4dc68b529",
00:08:32.609 "strip_size_kb": 64,
00:08:32.609 "state": "offline",
00:08:32.609 "raid_level": "raid0",
00:08:32.609 "superblock": false,
00:08:32.609 "num_base_bdevs": 3,
00:08:32.609 "num_base_bdevs_discovered": 2,
00:08:32.609 "num_base_bdevs_operational": 2,
00:08:32.609 "base_bdevs_list": [
00:08:32.609 {
00:08:32.609 "name": null,
00:08:32.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.609 "is_configured": false,
00:08:32.609 "data_offset": 0,
00:08:32.609 "data_size": 65536
00:08:32.609 },
00:08:32.609 {
00:08:32.609 "name": "BaseBdev2",
00:08:32.609 "uuid": "c6195296-bedf-41ee-8bdf-2c49d149fecc",
00:08:32.609 "is_configured": true,
00:08:32.609 "data_offset": 0,
00:08:32.609 "data_size": 65536
00:08:32.609 },
00:08:32.609 {
00:08:32.609 "name": "BaseBdev3",
00:08:32.609 "uuid": "b2c9418d-81e7-4855-976f-a8e9daff7d51",
00:08:32.609 "is_configured": true,
00:08:32.609 "data_offset": 0,
00:08:32.609 "data_size": 65536
00:08:32.609 }
00:08:32.609 ]
00:08:32.609 }'
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.609 18:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.176 [2024-11-26 18:55:24.380701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.176 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.434 [2024-11-26 18:55:24.553539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:33.434 [2024-11-26 18:55:24.553661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.434 BaseBdev2
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.434 [
00:08:33.434 {
00:08:33.434 "name": "BaseBdev2",
00:08:33.434 "aliases": [
00:08:33.434 "247a76da-3a10-4f6b-b634-b8d15165a328"
00:08:33.434 ],
00:08:33.434 "product_name": "Malloc disk",
00:08:33.434 "block_size": 512,
00:08:33.434 "num_blocks": 65536,
00:08:33.434 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328",
00:08:33.434 "assigned_rate_limits": {
00:08:33.434 "rw_ios_per_sec": 0,
00:08:33.434 "rw_mbytes_per_sec": 0,
00:08:33.434 "r_mbytes_per_sec": 0,
00:08:33.434 "w_mbytes_per_sec": 0
00:08:33.434 },
00:08:33.434 "claimed": false,
00:08:33.434 "zoned": false,
00:08:33.434 "supported_io_types": {
00:08:33.434 "read": true,
00:08:33.434 "write": true,
00:08:33.434 "unmap": true,
00:08:33.434 "flush": true,
00:08:33.434 "reset": true,
00:08:33.434 "nvme_admin": false,
00:08:33.434 "nvme_io": false,
00:08:33.434 "nvme_io_md": false,
00:08:33.434 "write_zeroes": true,
00:08:33.434 "zcopy": true,
00:08:33.434 "get_zone_info": false,
00:08:33.434 "zone_management": false,
00:08:33.434 "zone_append": false,
00:08:33.434 "compare": false,
00:08:33.434 "compare_and_write": false,
00:08:33.434 "abort": true,
00:08:33.434 "seek_hole": false,
00:08:33.434 "seek_data": false,
00:08:33.434 "copy": true,
00:08:33.434 "nvme_iov_md": false
00:08:33.434 },
00:08:33.434 "memory_domains": [
00:08:33.434 {
00:08:33.434 "dma_device_id": "system",
00:08:33.434 "dma_device_type": 1
00:08:33.434 },
00:08:33.434 {
00:08:33.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.434 "dma_device_type": 2
00:08:33.434 }
00:08:33.434 ],
00:08:33.434 "driver_specific": {}
00:08:33.434 }
00:08:33.434 ]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.434 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.692 BaseBdev3
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.692 [
00:08:33.692 {
00:08:33.692 "name": "BaseBdev3",
00:08:33.692 "aliases": [
00:08:33.692 "6ac5505b-36cf-4a7e-9391-ad4b7263c77c"
00:08:33.692 ],
00:08:33.692 "product_name": "Malloc disk",
00:08:33.692 "block_size": 512,
00:08:33.692 "num_blocks": 65536,
00:08:33.692 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c",
00:08:33.692 "assigned_rate_limits": {
00:08:33.692 "rw_ios_per_sec": 0,
00:08:33.692 "rw_mbytes_per_sec": 0,
00:08:33.692 "r_mbytes_per_sec": 0,
00:08:33.692 "w_mbytes_per_sec": 0
00:08:33.692 },
00:08:33.692 "claimed": false,
00:08:33.692 "zoned": false,
00:08:33.692 "supported_io_types": {
00:08:33.692 "read": true,
00:08:33.692 "write": true,
00:08:33.692 "unmap": true,
00:08:33.692 "flush": true,
00:08:33.692 "reset": true,
00:08:33.692 "nvme_admin": false,
00:08:33.692 "nvme_io": false,
00:08:33.692 "nvme_io_md": false,
00:08:33.692 "write_zeroes": true,
00:08:33.692 "zcopy": true,
00:08:33.692 "get_zone_info": false,
00:08:33.692 "zone_management": false,
00:08:33.692 "zone_append": false,
00:08:33.692 "compare": false,
00:08:33.692 "compare_and_write": false,
00:08:33.692 "abort": true,
00:08:33.692 "seek_hole": false,
00:08:33.692 "seek_data": false,
00:08:33.692 "copy": true,
00:08:33.692 "nvme_iov_md": false
00:08:33.692 },
00:08:33.692 "memory_domains": [
00:08:33.692 {
00:08:33.692 "dma_device_id": "system",
00:08:33.692 "dma_device_type": 1
00:08:33.692 },
00:08:33.692 {
00:08:33.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.692 "dma_device_type": 2
00:08:33.692 }
00:08:33.692 ],
00:08:33.692 "driver_specific": {}
00:08:33.692 }
00:08:33.692 ]
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.692 [2024-11-26 18:55:24.876557] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:33.692 [2024-11-26 18:55:24.876652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:33.692 [2024-11-26 18:55:24.876699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:33.692 [2024-11-26 18:55:24.879870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:33.692 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:33.693 "name": "Existed_Raid",
00:08:33.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:33.693 "strip_size_kb": 64,
00:08:33.693 "state": "configuring",
00:08:33.693 "raid_level": "raid0",
00:08:33.693 "superblock": false,
00:08:33.693 "num_base_bdevs": 3,
00:08:33.693 "num_base_bdevs_discovered": 2,
00:08:33.693 "num_base_bdevs_operational": 3,
00:08:33.693 "base_bdevs_list": [
00:08:33.693 {
00:08:33.693 "name": "BaseBdev1",
00:08:33.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:33.693
"is_configured": false, 00:08:33.693 "data_offset": 0, 00:08:33.693 "data_size": 0 00:08:33.693 }, 00:08:33.693 { 00:08:33.693 "name": "BaseBdev2", 00:08:33.693 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:33.693 "is_configured": true, 00:08:33.693 "data_offset": 0, 00:08:33.693 "data_size": 65536 00:08:33.693 }, 00:08:33.693 { 00:08:33.693 "name": "BaseBdev3", 00:08:33.693 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:33.693 "is_configured": true, 00:08:33.693 "data_offset": 0, 00:08:33.693 "data_size": 65536 00:08:33.693 } 00:08:33.693 ] 00:08:33.693 }' 00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.693 18:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.260 [2024-11-26 18:55:25.384732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.260 18:55:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.260 "name": "Existed_Raid", 00:08:34.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.260 "strip_size_kb": 64, 00:08:34.260 "state": "configuring", 00:08:34.260 "raid_level": "raid0", 00:08:34.260 "superblock": false, 00:08:34.260 "num_base_bdevs": 3, 00:08:34.260 "num_base_bdevs_discovered": 1, 00:08:34.260 "num_base_bdevs_operational": 3, 00:08:34.260 "base_bdevs_list": [ 00:08:34.260 { 00:08:34.260 "name": "BaseBdev1", 00:08:34.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.260 "is_configured": false, 00:08:34.260 "data_offset": 0, 00:08:34.260 "data_size": 0 00:08:34.260 }, 00:08:34.260 { 00:08:34.260 "name": null, 00:08:34.260 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:34.260 "is_configured": false, 00:08:34.260 "data_offset": 0, 
00:08:34.260 "data_size": 65536 00:08:34.260 }, 00:08:34.260 { 00:08:34.260 "name": "BaseBdev3", 00:08:34.260 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:34.260 "is_configured": true, 00:08:34.260 "data_offset": 0, 00:08:34.260 "data_size": 65536 00:08:34.260 } 00:08:34.260 ] 00:08:34.260 }' 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.260 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.519 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.779 [2024-11-26 18:55:25.920570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.779 BaseBdev1 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.779 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.779 [ 00:08:34.779 { 00:08:34.779 "name": "BaseBdev1", 00:08:34.779 "aliases": [ 00:08:34.779 "29e4b32f-c19c-4092-bea5-b4d38687678c" 00:08:34.779 ], 00:08:34.779 "product_name": "Malloc disk", 00:08:34.779 "block_size": 512, 00:08:34.779 "num_blocks": 65536, 00:08:34.779 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:34.779 "assigned_rate_limits": { 00:08:34.779 "rw_ios_per_sec": 0, 00:08:34.779 "rw_mbytes_per_sec": 0, 00:08:34.779 "r_mbytes_per_sec": 0, 00:08:34.779 "w_mbytes_per_sec": 0 00:08:34.779 }, 00:08:34.779 "claimed": true, 00:08:34.779 "claim_type": "exclusive_write", 00:08:34.779 "zoned": false, 00:08:34.779 "supported_io_types": { 00:08:34.779 "read": true, 00:08:34.779 "write": true, 00:08:34.779 "unmap": 
true, 00:08:34.779 "flush": true, 00:08:34.779 "reset": true, 00:08:34.779 "nvme_admin": false, 00:08:34.779 "nvme_io": false, 00:08:34.779 "nvme_io_md": false, 00:08:34.779 "write_zeroes": true, 00:08:34.779 "zcopy": true, 00:08:34.779 "get_zone_info": false, 00:08:34.779 "zone_management": false, 00:08:34.779 "zone_append": false, 00:08:34.779 "compare": false, 00:08:34.779 "compare_and_write": false, 00:08:34.779 "abort": true, 00:08:34.779 "seek_hole": false, 00:08:34.779 "seek_data": false, 00:08:34.779 "copy": true, 00:08:34.779 "nvme_iov_md": false 00:08:34.779 }, 00:08:34.779 "memory_domains": [ 00:08:34.779 { 00:08:34.779 "dma_device_id": "system", 00:08:34.779 "dma_device_type": 1 00:08:34.779 }, 00:08:34.779 { 00:08:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.779 "dma_device_type": 2 00:08:34.779 } 00:08:34.779 ], 00:08:34.779 "driver_specific": {} 00:08:34.779 } 00:08:34.779 ] 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.780 18:55:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.780 18:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.780 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.780 "name": "Existed_Raid", 00:08:34.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.780 "strip_size_kb": 64, 00:08:34.780 "state": "configuring", 00:08:34.780 "raid_level": "raid0", 00:08:34.780 "superblock": false, 00:08:34.780 "num_base_bdevs": 3, 00:08:34.780 "num_base_bdevs_discovered": 2, 00:08:34.780 "num_base_bdevs_operational": 3, 00:08:34.780 "base_bdevs_list": [ 00:08:34.780 { 00:08:34.780 "name": "BaseBdev1", 00:08:34.780 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:34.780 "is_configured": true, 00:08:34.780 "data_offset": 0, 00:08:34.780 "data_size": 65536 00:08:34.780 }, 00:08:34.780 { 00:08:34.780 "name": null, 00:08:34.780 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:34.780 "is_configured": false, 00:08:34.780 "data_offset": 0, 00:08:34.780 "data_size": 65536 00:08:34.780 }, 00:08:34.780 { 00:08:34.780 "name": "BaseBdev3", 00:08:34.780 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:34.780 "is_configured": true, 00:08:34.780 "data_offset": 0, 
00:08:34.780 "data_size": 65536 00:08:34.780 } 00:08:34.780 ] 00:08:34.780 }' 00:08:34.780 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.780 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.349 [2024-11-26 18:55:26.488789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.349 "name": "Existed_Raid", 00:08:35.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.349 "strip_size_kb": 64, 00:08:35.349 "state": "configuring", 00:08:35.349 "raid_level": "raid0", 00:08:35.349 "superblock": false, 00:08:35.349 "num_base_bdevs": 3, 00:08:35.349 "num_base_bdevs_discovered": 1, 00:08:35.349 "num_base_bdevs_operational": 3, 00:08:35.349 "base_bdevs_list": [ 00:08:35.349 { 00:08:35.349 "name": "BaseBdev1", 00:08:35.349 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:35.349 "is_configured": true, 00:08:35.349 "data_offset": 0, 00:08:35.349 "data_size": 65536 00:08:35.349 }, 00:08:35.349 { 
00:08:35.349 "name": null, 00:08:35.349 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:35.349 "is_configured": false, 00:08:35.349 "data_offset": 0, 00:08:35.349 "data_size": 65536 00:08:35.349 }, 00:08:35.349 { 00:08:35.349 "name": null, 00:08:35.349 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:35.349 "is_configured": false, 00:08:35.349 "data_offset": 0, 00:08:35.349 "data_size": 65536 00:08:35.349 } 00:08:35.349 ] 00:08:35.349 }' 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.349 18:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 [2024-11-26 18:55:27.069043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.916 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.917 "name": "Existed_Raid", 00:08:35.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.917 "strip_size_kb": 64, 00:08:35.917 "state": "configuring", 00:08:35.917 "raid_level": "raid0", 00:08:35.917 
"superblock": false, 00:08:35.917 "num_base_bdevs": 3, 00:08:35.917 "num_base_bdevs_discovered": 2, 00:08:35.917 "num_base_bdevs_operational": 3, 00:08:35.917 "base_bdevs_list": [ 00:08:35.917 { 00:08:35.917 "name": "BaseBdev1", 00:08:35.917 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:35.917 "is_configured": true, 00:08:35.917 "data_offset": 0, 00:08:35.917 "data_size": 65536 00:08:35.917 }, 00:08:35.917 { 00:08:35.917 "name": null, 00:08:35.917 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:35.917 "is_configured": false, 00:08:35.917 "data_offset": 0, 00:08:35.917 "data_size": 65536 00:08:35.917 }, 00:08:35.917 { 00:08:35.917 "name": "BaseBdev3", 00:08:35.917 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:35.917 "is_configured": true, 00:08:35.917 "data_offset": 0, 00:08:35.917 "data_size": 65536 00:08:35.917 } 00:08:35.917 ] 00:08:35.917 }' 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.917 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.484 [2024-11-26 18:55:27.669332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.484 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.485 "name": "Existed_Raid", 00:08:36.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.485 "strip_size_kb": 64, 00:08:36.485 "state": "configuring", 00:08:36.485 "raid_level": "raid0", 00:08:36.485 "superblock": false, 00:08:36.485 "num_base_bdevs": 3, 00:08:36.485 "num_base_bdevs_discovered": 1, 00:08:36.485 "num_base_bdevs_operational": 3, 00:08:36.485 "base_bdevs_list": [ 00:08:36.485 { 00:08:36.485 "name": null, 00:08:36.485 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:36.485 "is_configured": false, 00:08:36.485 "data_offset": 0, 00:08:36.485 "data_size": 65536 00:08:36.485 }, 00:08:36.485 { 00:08:36.485 "name": null, 00:08:36.485 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:36.485 "is_configured": false, 00:08:36.485 "data_offset": 0, 00:08:36.485 "data_size": 65536 00:08:36.485 }, 00:08:36.485 { 00:08:36.485 "name": "BaseBdev3", 00:08:36.485 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:36.485 "is_configured": true, 00:08:36.485 "data_offset": 0, 00:08:36.485 "data_size": 65536 00:08:36.485 } 00:08:36.485 ] 00:08:36.485 }' 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.485 18:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 [2024-11-26 18:55:28.334907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.113 "name": "Existed_Raid", 00:08:37.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.113 "strip_size_kb": 64, 00:08:37.113 "state": "configuring", 00:08:37.113 "raid_level": "raid0", 00:08:37.113 "superblock": false, 00:08:37.113 "num_base_bdevs": 3, 00:08:37.113 "num_base_bdevs_discovered": 2, 00:08:37.113 "num_base_bdevs_operational": 3, 00:08:37.113 "base_bdevs_list": [ 00:08:37.113 { 00:08:37.113 "name": null, 00:08:37.113 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:37.113 "is_configured": false, 00:08:37.113 "data_offset": 0, 00:08:37.113 "data_size": 65536 00:08:37.113 }, 00:08:37.113 { 00:08:37.113 "name": "BaseBdev2", 00:08:37.113 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:37.113 "is_configured": true, 00:08:37.113 "data_offset": 0, 00:08:37.113 "data_size": 65536 00:08:37.113 }, 00:08:37.113 { 00:08:37.113 "name": "BaseBdev3", 00:08:37.113 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:37.113 "is_configured": true, 00:08:37.113 "data_offset": 0, 00:08:37.113 "data_size": 65536 00:08:37.113 } 00:08:37.113 ] 00:08:37.113 }' 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.113 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.680 18:55:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29e4b32f-c19c-4092-bea5-b4d38687678c 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.680 18:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 [2024-11-26 18:55:29.009819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:37.680 [2024-11-26 18:55:29.009894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:37.680 [2024-11-26 18:55:29.009910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:37.680 [2024-11-26 18:55:29.010277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
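The probes above drill into the `bdev_raid_get_bdevs all` output with fixed jq index paths (`.[0].base_bdevs_list[1].is_configured`, `.[0].base_bdevs_list[0].uuid`) before recreating the missing base bdev under its original UUID. As a standalone sketch, the same filters can be exercised without a running SPDK target; the JSON below is a trimmed stand-in for real RPC output, not captured from this run:

```shell
# Trimmed stand-in for `rpc.py bdev_raid_get_bdevs all` output (illustrative values).
json='[{"name":"Existed_Raid","base_bdevs_list":[
  {"name":null,"uuid":"29e4b32f-c19c-4092-bea5-b4d38687678c","is_configured":false},
  {"name":"BaseBdev2","uuid":"247a76da-3a10-4f6b-b634-b8d15165a328","is_configured":true}]}]'

# Same filters the test script uses:
echo "$json" | jq '.[0].base_bdevs_list[1].is_configured'   # is the surviving base bdev configured?
echo "$json" | jq -r '.[0].base_bdevs_list[0].uuid'         # UUID to recreate the missing bdev with
```

The second filter's `-r` (raw output) matters: the UUID is fed straight back into `bdev_malloc_create ... -u <uuid>`, so it must not carry JSON quotes.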
00:08:37.680 [2024-11-26 18:55:29.010474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:37.680 [2024-11-26 18:55:29.010499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:37.680 [2024-11-26 18:55:29.010818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.680 NewBaseBdev 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:37.680 [ 00:08:37.680 { 00:08:37.680 "name": "NewBaseBdev", 00:08:37.680 "aliases": [ 00:08:37.680 "29e4b32f-c19c-4092-bea5-b4d38687678c" 00:08:37.680 ], 00:08:37.680 "product_name": "Malloc disk", 00:08:37.680 "block_size": 512, 00:08:37.680 "num_blocks": 65536, 00:08:37.680 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:37.680 "assigned_rate_limits": { 00:08:37.680 "rw_ios_per_sec": 0, 00:08:37.680 "rw_mbytes_per_sec": 0, 00:08:37.680 "r_mbytes_per_sec": 0, 00:08:37.680 "w_mbytes_per_sec": 0 00:08:37.680 }, 00:08:37.680 "claimed": true, 00:08:37.680 "claim_type": "exclusive_write", 00:08:37.680 "zoned": false, 00:08:37.680 "supported_io_types": { 00:08:37.680 "read": true, 00:08:37.680 "write": true, 00:08:37.680 "unmap": true, 00:08:37.680 "flush": true, 00:08:37.680 "reset": true, 00:08:37.680 "nvme_admin": false, 00:08:37.680 "nvme_io": false, 00:08:37.680 "nvme_io_md": false, 00:08:37.680 "write_zeroes": true, 00:08:37.680 "zcopy": true, 00:08:37.680 "get_zone_info": false, 00:08:37.680 "zone_management": false, 00:08:37.680 "zone_append": false, 00:08:37.680 "compare": false, 00:08:37.680 "compare_and_write": false, 00:08:37.680 "abort": true, 00:08:37.680 "seek_hole": false, 00:08:37.680 "seek_data": false, 00:08:37.680 "copy": true, 00:08:37.680 "nvme_iov_md": false 00:08:37.680 }, 00:08:37.680 "memory_domains": [ 00:08:37.680 { 00:08:37.680 "dma_device_id": "system", 00:08:37.680 "dma_device_type": 1 00:08:37.680 }, 00:08:37.680 { 00:08:37.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.680 "dma_device_type": 2 00:08:37.680 } 00:08:37.680 ], 00:08:37.680 "driver_specific": {} 00:08:37.680 } 00:08:37.680 ] 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.680 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.945 "name": "Existed_Raid", 00:08:37.945 "uuid": "fc2d1288-35e8-41f2-8c36-95c62f106f55", 00:08:37.945 "strip_size_kb": 64, 00:08:37.945 "state": "online", 00:08:37.945 "raid_level": "raid0", 00:08:37.945 "superblock": false, 00:08:37.945 "num_base_bdevs": 3, 00:08:37.945 
"num_base_bdevs_discovered": 3, 00:08:37.945 "num_base_bdevs_operational": 3, 00:08:37.945 "base_bdevs_list": [ 00:08:37.945 { 00:08:37.945 "name": "NewBaseBdev", 00:08:37.945 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:37.945 "is_configured": true, 00:08:37.945 "data_offset": 0, 00:08:37.945 "data_size": 65536 00:08:37.945 }, 00:08:37.945 { 00:08:37.945 "name": "BaseBdev2", 00:08:37.945 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:37.945 "is_configured": true, 00:08:37.945 "data_offset": 0, 00:08:37.945 "data_size": 65536 00:08:37.945 }, 00:08:37.945 { 00:08:37.945 "name": "BaseBdev3", 00:08:37.945 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:37.945 "is_configured": true, 00:08:37.945 "data_offset": 0, 00:08:37.945 "data_size": 65536 00:08:37.945 } 00:08:37.945 ] 00:08:37.945 }' 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.945 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.204 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.204 [2024-11-26 18:55:29.562433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.463 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.463 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.463 "name": "Existed_Raid", 00:08:38.463 "aliases": [ 00:08:38.463 "fc2d1288-35e8-41f2-8c36-95c62f106f55" 00:08:38.463 ], 00:08:38.463 "product_name": "Raid Volume", 00:08:38.463 "block_size": 512, 00:08:38.463 "num_blocks": 196608, 00:08:38.463 "uuid": "fc2d1288-35e8-41f2-8c36-95c62f106f55", 00:08:38.463 "assigned_rate_limits": { 00:08:38.463 "rw_ios_per_sec": 0, 00:08:38.463 "rw_mbytes_per_sec": 0, 00:08:38.463 "r_mbytes_per_sec": 0, 00:08:38.463 "w_mbytes_per_sec": 0 00:08:38.463 }, 00:08:38.463 "claimed": false, 00:08:38.463 "zoned": false, 00:08:38.463 "supported_io_types": { 00:08:38.463 "read": true, 00:08:38.463 "write": true, 00:08:38.463 "unmap": true, 00:08:38.463 "flush": true, 00:08:38.463 "reset": true, 00:08:38.463 "nvme_admin": false, 00:08:38.463 "nvme_io": false, 00:08:38.463 "nvme_io_md": false, 00:08:38.463 "write_zeroes": true, 00:08:38.463 "zcopy": false, 00:08:38.463 "get_zone_info": false, 00:08:38.463 "zone_management": false, 00:08:38.463 "zone_append": false, 00:08:38.463 "compare": false, 00:08:38.463 "compare_and_write": false, 00:08:38.463 "abort": false, 00:08:38.463 "seek_hole": false, 00:08:38.463 "seek_data": false, 00:08:38.463 "copy": false, 00:08:38.463 "nvme_iov_md": false 00:08:38.463 }, 00:08:38.463 "memory_domains": [ 00:08:38.463 { 00:08:38.463 "dma_device_id": "system", 00:08:38.463 "dma_device_type": 1 00:08:38.463 }, 00:08:38.463 { 00:08:38.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.463 "dma_device_type": 2 00:08:38.463 }, 
00:08:38.463 { 00:08:38.463 "dma_device_id": "system", 00:08:38.463 "dma_device_type": 1 00:08:38.463 }, 00:08:38.463 { 00:08:38.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.463 "dma_device_type": 2 00:08:38.463 }, 00:08:38.463 { 00:08:38.463 "dma_device_id": "system", 00:08:38.463 "dma_device_type": 1 00:08:38.463 }, 00:08:38.463 { 00:08:38.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.463 "dma_device_type": 2 00:08:38.463 } 00:08:38.463 ], 00:08:38.463 "driver_specific": { 00:08:38.463 "raid": { 00:08:38.463 "uuid": "fc2d1288-35e8-41f2-8c36-95c62f106f55", 00:08:38.463 "strip_size_kb": 64, 00:08:38.463 "state": "online", 00:08:38.463 "raid_level": "raid0", 00:08:38.463 "superblock": false, 00:08:38.463 "num_base_bdevs": 3, 00:08:38.463 "num_base_bdevs_discovered": 3, 00:08:38.463 "num_base_bdevs_operational": 3, 00:08:38.463 "base_bdevs_list": [ 00:08:38.463 { 00:08:38.463 "name": "NewBaseBdev", 00:08:38.463 "uuid": "29e4b32f-c19c-4092-bea5-b4d38687678c", 00:08:38.463 "is_configured": true, 00:08:38.463 "data_offset": 0, 00:08:38.463 "data_size": 65536 00:08:38.463 }, 00:08:38.463 { 00:08:38.463 "name": "BaseBdev2", 00:08:38.463 "uuid": "247a76da-3a10-4f6b-b634-b8d15165a328", 00:08:38.463 "is_configured": true, 00:08:38.463 "data_offset": 0, 00:08:38.463 "data_size": 65536 00:08:38.463 }, 00:08:38.463 { 00:08:38.463 "name": "BaseBdev3", 00:08:38.463 "uuid": "6ac5505b-36cf-4a7e-9391-ad4b7263c77c", 00:08:38.463 "is_configured": true, 00:08:38.464 "data_offset": 0, 00:08:38.464 "data_size": 65536 00:08:38.464 } 00:08:38.464 ] 00:08:38.464 } 00:08:38.464 } 00:08:38.464 }' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:38.464 BaseBdev2 00:08:38.464 BaseBdev3' 00:08:38.464 18:55:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.464 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.723 [2024-11-26 18:55:29.850144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.723 [2024-11-26 18:55:29.850182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.723 [2024-11-26 18:55:29.850301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.723 [2024-11-26 18:55:29.850395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.723 [2024-11-26 18:55:29.850416] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63830 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63830 ']' 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63830 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63830 00:08:38.723 killing process with pid 63830 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63830' 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63830 00:08:38.723 [2024-11-26 18:55:29.888741] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.723 18:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63830 00:08:38.982 [2024-11-26 18:55:30.162317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.917 18:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:39.917 00:08:39.917 real 0m11.974s 00:08:39.917 user 0m19.660s 00:08:39.917 sys 0m1.706s 00:08:39.917 18:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:39.917 ************************************ 00:08:39.917 END TEST raid_state_function_test 00:08:39.917 ************************************ 00:08:39.917 18:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.201 18:55:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:40.201 18:55:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:40.201 18:55:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.201 18:55:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.201 ************************************ 00:08:40.201 START TEST raid_state_function_test_sb 00:08:40.201 ************************************ 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.201 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:40.202 Process raid pid: 64462 00:08:40.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
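The trace above shows `raid_state_function_test_sb` assembling its create arguments: `strip_size_create_arg='-z 64'` because the level is not raid1, and `superblock_create_arg=-s` because this is the superblock variant. A minimal sketch of that composition, with argument names taken from the log (no `rpc.py` is actually invoked here):

```shell
# Sketch of how the test composes the bdev_raid_create call from its locals.
raid_level=raid0
strip_size_create_arg="-z 64"   # set only for levels other than raid1
superblock_create_arg="-s"      # set only in the _sb (superblock) variant
base_bdevs="BaseBdev1 BaseBdev2 BaseBdev3"

cmd="bdev_raid_create $strip_size_create_arg $superblock_create_arg -r $raid_level -b '$base_bdevs' -n Existed_Raid"
echo "$cmd"
# → bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
```

This matches the `rpc_cmd bdev_raid_create -z 64 -s -r raid0 ...` line that appears later in the trace; the non-superblock run earlier differs only in omitting `-s`.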
00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64462 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64462' 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64462 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64462 ']' 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.202 18:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.202 [2024-11-26 18:55:31.402898] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:08:40.202 [2024-11-26 18:55:31.403353] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.477 [2024-11-26 18:55:31.584867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.477 [2024-11-26 18:55:31.721809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.739 [2024-11-26 18:55:31.953032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.739 [2024-11-26 18:55:31.953369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.998 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.998 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:40.998 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.998 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.998 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.259 [2024-11-26 18:55:32.362186] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.259 [2024-11-26 18:55:32.362272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.259 [2024-11-26 18:55:32.362294] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.259 [2024-11-26 18:55:32.362315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.259 [2024-11-26 18:55:32.362328] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:41.259 [2024-11-26 18:55:32.362347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.259 "name": "Existed_Raid", 00:08:41.259 "uuid": "ec5c74cf-cc45-442d-9e70-5e1a81dea6d9", 00:08:41.259 "strip_size_kb": 64, 00:08:41.259 "state": "configuring", 00:08:41.259 "raid_level": "raid0", 00:08:41.259 "superblock": true, 00:08:41.259 "num_base_bdevs": 3, 00:08:41.259 "num_base_bdevs_discovered": 0, 00:08:41.259 "num_base_bdevs_operational": 3, 00:08:41.259 "base_bdevs_list": [ 00:08:41.259 { 00:08:41.259 "name": "BaseBdev1", 00:08:41.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.259 "is_configured": false, 00:08:41.259 "data_offset": 0, 00:08:41.259 "data_size": 0 00:08:41.259 }, 00:08:41.259 { 00:08:41.259 "name": "BaseBdev2", 00:08:41.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.259 "is_configured": false, 00:08:41.259 "data_offset": 0, 00:08:41.259 "data_size": 0 00:08:41.259 }, 00:08:41.259 { 00:08:41.259 "name": "BaseBdev3", 00:08:41.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.259 "is_configured": false, 00:08:41.259 "data_offset": 0, 00:08:41.259 "data_size": 0 00:08:41.259 } 00:08:41.259 ] 00:08:41.259 }' 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.259 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.518 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.518 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.518 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.518 [2024-11-26 18:55:32.878250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.518 [2024-11-26 18:55:32.878314] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.777 [2024-11-26 18:55:32.890292] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.777 [2024-11-26 18:55:32.890526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.777 [2024-11-26 18:55:32.890555] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.777 [2024-11-26 18:55:32.890574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.777 [2024-11-26 18:55:32.890584] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:41.777 [2024-11-26 18:55:32.890600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.777 [2024-11-26 18:55:32.938628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.777 BaseBdev1 
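After creating BaseBdev1, the trace's `waitforbdev` helper polls `bdev_get_bdevs -b <name> -t 2000` and the dumped descriptor shows `"claimed": true` with `"claim_type": "exclusive_write"`, i.e. the raid module has claimed the base bdev. A standalone sketch of that check, run against a trimmed stand-in for the RPC output rather than a live target:

```shell
# Trimmed stand-in for `rpc.py bdev_get_bdevs -b BaseBdev1` output (illustrative values).
bdev='[{"name":"BaseBdev1","claimed":true,"claim_type":"exclusive_write",
  "block_size":512,"num_blocks":65536}]'

echo "$bdev" | jq -r '.[0].claimed'      # true once the raid holds the claim
echo "$bdev" | jq -r '.[0].claim_type'   # exclusive_write for malloc base bdevs
```

An unclaimed bdev would report `"claimed": false` and omit `claim_type`, which is how the earlier `configuring`-state checks distinguish attached from missing base bdevs.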
00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.777 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.777 [ 00:08:41.777 { 00:08:41.777 "name": "BaseBdev1", 00:08:41.777 "aliases": [ 00:08:41.777 "3220382a-e431-4ebb-89fe-d9752bf4dcdc" 00:08:41.777 ], 00:08:41.777 "product_name": "Malloc disk", 00:08:41.777 "block_size": 512, 00:08:41.777 "num_blocks": 65536, 00:08:41.777 "uuid": "3220382a-e431-4ebb-89fe-d9752bf4dcdc", 00:08:41.777 "assigned_rate_limits": { 00:08:41.777 
"rw_ios_per_sec": 0, 00:08:41.777 "rw_mbytes_per_sec": 0, 00:08:41.777 "r_mbytes_per_sec": 0, 00:08:41.777 "w_mbytes_per_sec": 0 00:08:41.777 }, 00:08:41.777 "claimed": true, 00:08:41.777 "claim_type": "exclusive_write", 00:08:41.777 "zoned": false, 00:08:41.777 "supported_io_types": { 00:08:41.777 "read": true, 00:08:41.777 "write": true, 00:08:41.777 "unmap": true, 00:08:41.777 "flush": true, 00:08:41.777 "reset": true, 00:08:41.777 "nvme_admin": false, 00:08:41.777 "nvme_io": false, 00:08:41.777 "nvme_io_md": false, 00:08:41.777 "write_zeroes": true, 00:08:41.777 "zcopy": true, 00:08:41.777 "get_zone_info": false, 00:08:41.777 "zone_management": false, 00:08:41.777 "zone_append": false, 00:08:41.777 "compare": false, 00:08:41.778 "compare_and_write": false, 00:08:41.778 "abort": true, 00:08:41.778 "seek_hole": false, 00:08:41.778 "seek_data": false, 00:08:41.778 "copy": true, 00:08:41.778 "nvme_iov_md": false 00:08:41.778 }, 00:08:41.778 "memory_domains": [ 00:08:41.778 { 00:08:41.778 "dma_device_id": "system", 00:08:41.778 "dma_device_type": 1 00:08:41.778 }, 00:08:41.778 { 00:08:41.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.778 "dma_device_type": 2 00:08:41.778 } 00:08:41.778 ], 00:08:41.778 "driver_specific": {} 00:08:41.778 } 00:08:41.778 ] 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.778 18:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.778 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.778 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.778 "name": "Existed_Raid", 00:08:41.778 "uuid": "73237727-446e-4099-be2d-b886bc052e80", 00:08:41.778 "strip_size_kb": 64, 00:08:41.778 "state": "configuring", 00:08:41.778 "raid_level": "raid0", 00:08:41.778 "superblock": true, 00:08:41.778 "num_base_bdevs": 3, 00:08:41.778 "num_base_bdevs_discovered": 1, 00:08:41.778 "num_base_bdevs_operational": 3, 00:08:41.778 "base_bdevs_list": [ 00:08:41.778 { 00:08:41.778 "name": "BaseBdev1", 00:08:41.778 "uuid": "3220382a-e431-4ebb-89fe-d9752bf4dcdc", 00:08:41.778 "is_configured": true, 00:08:41.778 "data_offset": 2048, 00:08:41.778 "data_size": 63488 
00:08:41.778 }, 00:08:41.778 { 00:08:41.778 "name": "BaseBdev2", 00:08:41.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.778 "is_configured": false, 00:08:41.778 "data_offset": 0, 00:08:41.778 "data_size": 0 00:08:41.778 }, 00:08:41.778 { 00:08:41.778 "name": "BaseBdev3", 00:08:41.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.778 "is_configured": false, 00:08:41.778 "data_offset": 0, 00:08:41.778 "data_size": 0 00:08:41.778 } 00:08:41.778 ] 00:08:41.778 }' 00:08:41.778 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.778 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.345 [2024-11-26 18:55:33.486827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.345 [2024-11-26 18:55:33.487054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.345 [2024-11-26 18:55:33.498880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.345 [2024-11-26 
18:55:33.501400] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.345 [2024-11-26 18:55:33.501602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.345 [2024-11-26 18:55:33.501631] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.345 [2024-11-26 18:55:33.501649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.345 "name": "Existed_Raid", 00:08:42.345 "uuid": "2b57e0d5-758b-4294-849c-ef9e5e840149", 00:08:42.345 "strip_size_kb": 64, 00:08:42.345 "state": "configuring", 00:08:42.345 "raid_level": "raid0", 00:08:42.345 "superblock": true, 00:08:42.345 "num_base_bdevs": 3, 00:08:42.345 "num_base_bdevs_discovered": 1, 00:08:42.345 "num_base_bdevs_operational": 3, 00:08:42.345 "base_bdevs_list": [ 00:08:42.345 { 00:08:42.345 "name": "BaseBdev1", 00:08:42.345 "uuid": "3220382a-e431-4ebb-89fe-d9752bf4dcdc", 00:08:42.345 "is_configured": true, 00:08:42.345 "data_offset": 2048, 00:08:42.345 "data_size": 63488 00:08:42.345 }, 00:08:42.345 { 00:08:42.345 "name": "BaseBdev2", 00:08:42.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.345 "is_configured": false, 00:08:42.345 "data_offset": 0, 00:08:42.345 "data_size": 0 00:08:42.345 }, 00:08:42.345 { 00:08:42.345 "name": "BaseBdev3", 00:08:42.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.345 "is_configured": false, 00:08:42.345 "data_offset": 0, 00:08:42.345 "data_size": 0 00:08:42.345 } 00:08:42.345 ] 00:08:42.345 }' 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.345 18:55:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 [2024-11-26 18:55:34.082238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.915 BaseBdev2 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 [ 00:08:42.915 { 00:08:42.915 "name": "BaseBdev2", 00:08:42.915 "aliases": [ 00:08:42.915 "601142a5-1594-491a-99ab-a79df38c102f" 00:08:42.915 ], 00:08:42.915 "product_name": "Malloc disk", 00:08:42.915 "block_size": 512, 00:08:42.915 "num_blocks": 65536, 00:08:42.915 "uuid": "601142a5-1594-491a-99ab-a79df38c102f", 00:08:42.915 "assigned_rate_limits": { 00:08:42.915 "rw_ios_per_sec": 0, 00:08:42.915 "rw_mbytes_per_sec": 0, 00:08:42.915 "r_mbytes_per_sec": 0, 00:08:42.915 "w_mbytes_per_sec": 0 00:08:42.915 }, 00:08:42.915 "claimed": true, 00:08:42.915 "claim_type": "exclusive_write", 00:08:42.915 "zoned": false, 00:08:42.915 "supported_io_types": { 00:08:42.915 "read": true, 00:08:42.915 "write": true, 00:08:42.915 "unmap": true, 00:08:42.915 "flush": true, 00:08:42.915 "reset": true, 00:08:42.915 "nvme_admin": false, 00:08:42.915 "nvme_io": false, 00:08:42.915 "nvme_io_md": false, 00:08:42.915 "write_zeroes": true, 00:08:42.915 "zcopy": true, 00:08:42.915 "get_zone_info": false, 00:08:42.915 "zone_management": false, 00:08:42.915 "zone_append": false, 00:08:42.915 "compare": false, 00:08:42.915 "compare_and_write": false, 00:08:42.915 "abort": true, 00:08:42.915 "seek_hole": false, 00:08:42.915 "seek_data": false, 00:08:42.915 "copy": true, 00:08:42.915 "nvme_iov_md": false 00:08:42.915 }, 00:08:42.915 "memory_domains": [ 00:08:42.915 { 00:08:42.915 "dma_device_id": "system", 00:08:42.915 "dma_device_type": 1 00:08:42.915 }, 00:08:42.915 { 00:08:42.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.915 "dma_device_type": 2 00:08:42.915 } 00:08:42.915 ], 00:08:42.915 "driver_specific": {} 00:08:42.915 } 00:08:42.915 ] 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.915 "name": "Existed_Raid", 00:08:42.915 "uuid": "2b57e0d5-758b-4294-849c-ef9e5e840149", 00:08:42.915 "strip_size_kb": 64, 00:08:42.915 "state": "configuring", 00:08:42.915 "raid_level": "raid0", 00:08:42.915 "superblock": true, 00:08:42.915 "num_base_bdevs": 3, 00:08:42.915 "num_base_bdevs_discovered": 2, 00:08:42.915 "num_base_bdevs_operational": 3, 00:08:42.915 "base_bdevs_list": [ 00:08:42.915 { 00:08:42.915 "name": "BaseBdev1", 00:08:42.915 "uuid": "3220382a-e431-4ebb-89fe-d9752bf4dcdc", 00:08:42.915 "is_configured": true, 00:08:42.915 "data_offset": 2048, 00:08:42.915 "data_size": 63488 00:08:42.915 }, 00:08:42.915 { 00:08:42.915 "name": "BaseBdev2", 00:08:42.915 "uuid": "601142a5-1594-491a-99ab-a79df38c102f", 00:08:42.915 "is_configured": true, 00:08:42.915 "data_offset": 2048, 00:08:42.915 "data_size": 63488 00:08:42.915 }, 00:08:42.915 { 00:08:42.915 "name": "BaseBdev3", 00:08:42.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.915 "is_configured": false, 00:08:42.915 "data_offset": 0, 00:08:42.915 "data_size": 0 00:08:42.915 } 00:08:42.915 ] 00:08:42.915 }' 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.915 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.484 [2024-11-26 18:55:34.662869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.484 [2024-11-26 18:55:34.663289] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.484 [2024-11-26 18:55:34.663321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.484 [2024-11-26 18:55:34.663689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:43.484 [2024-11-26 18:55:34.663937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.484 [2024-11-26 18:55:34.663957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:43.484 BaseBdev3 00:08:43.484 [2024-11-26 18:55:34.664159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.484 [ 00:08:43.484 { 00:08:43.484 "name": "BaseBdev3", 00:08:43.484 "aliases": [ 00:08:43.484 "a81e150e-5272-4e42-831b-817f07867f75" 00:08:43.484 ], 00:08:43.484 "product_name": "Malloc disk", 00:08:43.484 "block_size": 512, 00:08:43.484 "num_blocks": 65536, 00:08:43.484 "uuid": "a81e150e-5272-4e42-831b-817f07867f75", 00:08:43.484 "assigned_rate_limits": { 00:08:43.484 "rw_ios_per_sec": 0, 00:08:43.484 "rw_mbytes_per_sec": 0, 00:08:43.484 "r_mbytes_per_sec": 0, 00:08:43.484 "w_mbytes_per_sec": 0 00:08:43.484 }, 00:08:43.484 "claimed": true, 00:08:43.484 "claim_type": "exclusive_write", 00:08:43.484 "zoned": false, 00:08:43.484 "supported_io_types": { 00:08:43.484 "read": true, 00:08:43.484 "write": true, 00:08:43.484 "unmap": true, 00:08:43.484 "flush": true, 00:08:43.484 "reset": true, 00:08:43.484 "nvme_admin": false, 00:08:43.484 "nvme_io": false, 00:08:43.484 "nvme_io_md": false, 00:08:43.484 "write_zeroes": true, 00:08:43.484 "zcopy": true, 00:08:43.484 "get_zone_info": false, 00:08:43.484 "zone_management": false, 00:08:43.484 "zone_append": false, 00:08:43.484 "compare": false, 00:08:43.484 "compare_and_write": false, 00:08:43.484 "abort": true, 00:08:43.484 "seek_hole": false, 00:08:43.484 "seek_data": false, 00:08:43.484 "copy": true, 00:08:43.484 "nvme_iov_md": false 00:08:43.484 }, 00:08:43.484 "memory_domains": [ 00:08:43.484 { 00:08:43.484 "dma_device_id": "system", 00:08:43.484 "dma_device_type": 1 00:08:43.484 }, 00:08:43.484 { 00:08:43.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.484 "dma_device_type": 2 00:08:43.484 } 00:08:43.484 ], 00:08:43.484 "driver_specific": 
{} 00:08:43.484 } 00:08:43.484 ] 00:08:43.484 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.485 "name": "Existed_Raid", 00:08:43.485 "uuid": "2b57e0d5-758b-4294-849c-ef9e5e840149", 00:08:43.485 "strip_size_kb": 64, 00:08:43.485 "state": "online", 00:08:43.485 "raid_level": "raid0", 00:08:43.485 "superblock": true, 00:08:43.485 "num_base_bdevs": 3, 00:08:43.485 "num_base_bdevs_discovered": 3, 00:08:43.485 "num_base_bdevs_operational": 3, 00:08:43.485 "base_bdevs_list": [ 00:08:43.485 { 00:08:43.485 "name": "BaseBdev1", 00:08:43.485 "uuid": "3220382a-e431-4ebb-89fe-d9752bf4dcdc", 00:08:43.485 "is_configured": true, 00:08:43.485 "data_offset": 2048, 00:08:43.485 "data_size": 63488 00:08:43.485 }, 00:08:43.485 { 00:08:43.485 "name": "BaseBdev2", 00:08:43.485 "uuid": "601142a5-1594-491a-99ab-a79df38c102f", 00:08:43.485 "is_configured": true, 00:08:43.485 "data_offset": 2048, 00:08:43.485 "data_size": 63488 00:08:43.485 }, 00:08:43.485 { 00:08:43.485 "name": "BaseBdev3", 00:08:43.485 "uuid": "a81e150e-5272-4e42-831b-817f07867f75", 00:08:43.485 "is_configured": true, 00:08:43.485 "data_offset": 2048, 00:08:43.485 "data_size": 63488 00:08:43.485 } 00:08:43.485 ] 00:08:43.485 }' 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.485 18:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.054 [2024-11-26 18:55:35.199474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.054 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.054 "name": "Existed_Raid", 00:08:44.054 "aliases": [ 00:08:44.054 "2b57e0d5-758b-4294-849c-ef9e5e840149" 00:08:44.054 ], 00:08:44.054 "product_name": "Raid Volume", 00:08:44.054 "block_size": 512, 00:08:44.054 "num_blocks": 190464, 00:08:44.054 "uuid": "2b57e0d5-758b-4294-849c-ef9e5e840149", 00:08:44.054 "assigned_rate_limits": { 00:08:44.054 "rw_ios_per_sec": 0, 00:08:44.054 "rw_mbytes_per_sec": 0, 00:08:44.054 "r_mbytes_per_sec": 0, 00:08:44.054 "w_mbytes_per_sec": 0 00:08:44.054 }, 00:08:44.054 "claimed": false, 00:08:44.054 "zoned": false, 00:08:44.054 "supported_io_types": { 00:08:44.054 "read": true, 00:08:44.054 "write": true, 00:08:44.054 "unmap": true, 00:08:44.054 "flush": true, 00:08:44.054 "reset": true, 00:08:44.054 "nvme_admin": false, 00:08:44.054 "nvme_io": false, 00:08:44.054 "nvme_io_md": false, 00:08:44.054 
"write_zeroes": true, 00:08:44.054 "zcopy": false, 00:08:44.054 "get_zone_info": false, 00:08:44.054 "zone_management": false, 00:08:44.054 "zone_append": false, 00:08:44.054 "compare": false, 00:08:44.055 "compare_and_write": false, 00:08:44.055 "abort": false, 00:08:44.055 "seek_hole": false, 00:08:44.055 "seek_data": false, 00:08:44.055 "copy": false, 00:08:44.055 "nvme_iov_md": false 00:08:44.055 }, 00:08:44.055 "memory_domains": [ 00:08:44.055 { 00:08:44.055 "dma_device_id": "system", 00:08:44.055 "dma_device_type": 1 00:08:44.055 }, 00:08:44.055 { 00:08:44.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.055 "dma_device_type": 2 00:08:44.055 }, 00:08:44.055 { 00:08:44.055 "dma_device_id": "system", 00:08:44.055 "dma_device_type": 1 00:08:44.055 }, 00:08:44.055 { 00:08:44.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.055 "dma_device_type": 2 00:08:44.055 }, 00:08:44.055 { 00:08:44.055 "dma_device_id": "system", 00:08:44.055 "dma_device_type": 1 00:08:44.055 }, 00:08:44.055 { 00:08:44.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.055 "dma_device_type": 2 00:08:44.055 } 00:08:44.055 ], 00:08:44.055 "driver_specific": { 00:08:44.055 "raid": { 00:08:44.055 "uuid": "2b57e0d5-758b-4294-849c-ef9e5e840149", 00:08:44.055 "strip_size_kb": 64, 00:08:44.055 "state": "online", 00:08:44.055 "raid_level": "raid0", 00:08:44.055 "superblock": true, 00:08:44.055 "num_base_bdevs": 3, 00:08:44.055 "num_base_bdevs_discovered": 3, 00:08:44.055 "num_base_bdevs_operational": 3, 00:08:44.055 "base_bdevs_list": [ 00:08:44.055 { 00:08:44.055 "name": "BaseBdev1", 00:08:44.055 "uuid": "3220382a-e431-4ebb-89fe-d9752bf4dcdc", 00:08:44.055 "is_configured": true, 00:08:44.055 "data_offset": 2048, 00:08:44.055 "data_size": 63488 00:08:44.055 }, 00:08:44.055 { 00:08:44.055 "name": "BaseBdev2", 00:08:44.055 "uuid": "601142a5-1594-491a-99ab-a79df38c102f", 00:08:44.055 "is_configured": true, 00:08:44.055 "data_offset": 2048, 00:08:44.055 "data_size": 63488 00:08:44.055 }, 
00:08:44.055 { 00:08:44.055 "name": "BaseBdev3", 00:08:44.055 "uuid": "a81e150e-5272-4e42-831b-817f07867f75", 00:08:44.055 "is_configured": true, 00:08:44.055 "data_offset": 2048, 00:08:44.055 "data_size": 63488 00:08:44.055 } 00:08:44.055 ] 00:08:44.055 } 00:08:44.055 } 00:08:44.055 }' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.055 BaseBdev2 00:08:44.055 BaseBdev3' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.055 
18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.055 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.318 [2024-11-26 18:55:35.519266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.318 [2024-11-26 18:55:35.519303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.318 [2024-11-26 18:55:35.519381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.318 "name": "Existed_Raid", 00:08:44.318 "uuid": "2b57e0d5-758b-4294-849c-ef9e5e840149", 00:08:44.318 "strip_size_kb": 64, 00:08:44.318 "state": "offline", 00:08:44.318 "raid_level": "raid0", 00:08:44.318 "superblock": true, 00:08:44.318 "num_base_bdevs": 3, 00:08:44.318 "num_base_bdevs_discovered": 2, 00:08:44.318 "num_base_bdevs_operational": 2, 00:08:44.318 "base_bdevs_list": [ 00:08:44.318 { 00:08:44.318 "name": null, 00:08:44.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.318 "is_configured": false, 00:08:44.318 "data_offset": 0, 00:08:44.318 "data_size": 63488 00:08:44.318 }, 00:08:44.318 { 00:08:44.318 "name": "BaseBdev2", 00:08:44.318 "uuid": "601142a5-1594-491a-99ab-a79df38c102f", 00:08:44.318 "is_configured": true, 00:08:44.318 "data_offset": 2048, 00:08:44.318 "data_size": 63488 00:08:44.318 }, 00:08:44.318 { 00:08:44.318 "name": "BaseBdev3", 00:08:44.318 "uuid": "a81e150e-5272-4e42-831b-817f07867f75", 
00:08:44.318 "is_configured": true, 00:08:44.318 "data_offset": 2048, 00:08:44.318 "data_size": 63488 00:08:44.318 } 00:08:44.318 ] 00:08:44.318 }' 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.318 18:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.887 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.887 [2024-11-26 18:55:36.180622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb 
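The sequence above (delete `BaseBdev1`, then `has_redundancy raid0` returning 1, then `expected_state=offline`) encodes the test's core assertion: raid0 carries no redundancy, so losing any base bdev deconfigures the array. A loose Python analogue of `verify_raid_bdev_state` under that logic; the set of levels treated as redundant is an assumption drawn from this log, not an exhaustive list:

```python
# Assumption: mirrored/parity levels survive a base bdev loss; striped
# levels such as raid0 do not (mirrors has_redundancy() in bdev_raid.sh@198).
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state(raid_level):
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

def verify_raid_bdev_state(info, state, level, strip_kb, operational):
    # Loose analogue of verify_raid_bdev_state() in bdev_raid.sh@103-.
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_kb
            and info["num_base_bdevs_operational"] == operational)

# Values taken from the Existed_Raid JSON dumped above after the deletion.
offline_info = {"state": "offline", "raid_level": "raid0",
                "strip_size_kb": 64, "num_base_bdevs_operational": 2}
print(verify_raid_bdev_state(offline_info, expected_state("raid0"),
                             "raid0", 64, 2))  # True
```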
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.145 [2024-11-26 18:55:36.328544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:45.145 [2024-11-26 18:55:36.328613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.145 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:45.146 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:45.146 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.146 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.146 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.146 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.404 BaseBdev2 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:45.404 18:55:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.404 [ 00:08:45.404 { 00:08:45.404 "name": "BaseBdev2", 00:08:45.404 "aliases": [ 00:08:45.404 "b05a5d7e-a938-49fc-a153-68fc2be3b398" 00:08:45.404 ], 00:08:45.404 "product_name": "Malloc disk", 00:08:45.404 "block_size": 512, 00:08:45.404 "num_blocks": 65536, 00:08:45.404 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:45.404 "assigned_rate_limits": { 00:08:45.404 "rw_ios_per_sec": 0, 00:08:45.404 "rw_mbytes_per_sec": 0, 00:08:45.404 "r_mbytes_per_sec": 0, 00:08:45.404 "w_mbytes_per_sec": 0 00:08:45.404 }, 00:08:45.404 "claimed": false, 00:08:45.404 "zoned": false, 00:08:45.404 "supported_io_types": { 00:08:45.404 "read": true, 00:08:45.404 "write": true, 00:08:45.404 "unmap": true, 00:08:45.404 "flush": true, 00:08:45.404 "reset": true, 00:08:45.404 "nvme_admin": false, 00:08:45.404 "nvme_io": false, 00:08:45.404 "nvme_io_md": false, 00:08:45.404 "write_zeroes": true, 00:08:45.404 "zcopy": true, 00:08:45.404 "get_zone_info": false, 00:08:45.404 
"zone_management": false, 00:08:45.404 "zone_append": false, 00:08:45.404 "compare": false, 00:08:45.404 "compare_and_write": false, 00:08:45.404 "abort": true, 00:08:45.404 "seek_hole": false, 00:08:45.404 "seek_data": false, 00:08:45.404 "copy": true, 00:08:45.404 "nvme_iov_md": false 00:08:45.404 }, 00:08:45.404 "memory_domains": [ 00:08:45.404 { 00:08:45.404 "dma_device_id": "system", 00:08:45.404 "dma_device_type": 1 00:08:45.404 }, 00:08:45.404 { 00:08:45.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.404 "dma_device_type": 2 00:08:45.404 } 00:08:45.404 ], 00:08:45.404 "driver_specific": {} 00:08:45.404 } 00:08:45.404 ] 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.404 BaseBdev3 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:45.404 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- 
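The `waitforbdev BaseBdev2` call above defaults `bdev_timeout` to 2000 ms when none is given, then waits for the bdev to become visible via `bdev_get_bdevs`. A hedged sketch of that wait loop; the `lookup` callable is a stand-in for `rpc_cmd bdev_get_bdevs -b NAME`, and the polling interval is our choice:

```python
import time

def waitforbdev(lookup, bdev_name, timeout_ms=2000, poll_ms=50):
    """Poll `lookup` (stand-in for `rpc_cmd bdev_get_bdevs -b NAME`)
    until it reports the bdev or the timeout expires."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if lookup(bdev_name):
            return True
        time.sleep(poll_ms / 1000.0)
    return False

# Toy registry standing in for the SPDK target's bdev table.
registry = {"BaseBdev2": {"block_size": 512, "num_blocks": 65536}}
print(waitforbdev(lambda n: n in registry, "BaseBdev2"))                  # True
print(waitforbdev(lambda n: n in registry, "NoSuchBdev", timeout_ms=100)) # False
```

In the actual script the timeout is also passed through to the RPC itself (`bdev_get_bdevs -b BaseBdev2 -t 2000`), so the server side shares the wait.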
common/autotest_common.sh@905 -- # local i 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.405 [ 00:08:45.405 { 00:08:45.405 "name": "BaseBdev3", 00:08:45.405 "aliases": [ 00:08:45.405 "af2bac32-a840-4908-9dd5-99328edc2dcb" 00:08:45.405 ], 00:08:45.405 "product_name": "Malloc disk", 00:08:45.405 "block_size": 512, 00:08:45.405 "num_blocks": 65536, 00:08:45.405 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:45.405 "assigned_rate_limits": { 00:08:45.405 "rw_ios_per_sec": 0, 00:08:45.405 "rw_mbytes_per_sec": 0, 00:08:45.405 "r_mbytes_per_sec": 0, 00:08:45.405 "w_mbytes_per_sec": 0 00:08:45.405 }, 00:08:45.405 "claimed": false, 00:08:45.405 "zoned": false, 00:08:45.405 "supported_io_types": { 00:08:45.405 "read": true, 00:08:45.405 "write": true, 00:08:45.405 "unmap": true, 00:08:45.405 "flush": true, 00:08:45.405 "reset": true, 00:08:45.405 "nvme_admin": false, 00:08:45.405 "nvme_io": false, 00:08:45.405 "nvme_io_md": false, 00:08:45.405 "write_zeroes": true, 00:08:45.405 
"zcopy": true, 00:08:45.405 "get_zone_info": false, 00:08:45.405 "zone_management": false, 00:08:45.405 "zone_append": false, 00:08:45.405 "compare": false, 00:08:45.405 "compare_and_write": false, 00:08:45.405 "abort": true, 00:08:45.405 "seek_hole": false, 00:08:45.405 "seek_data": false, 00:08:45.405 "copy": true, 00:08:45.405 "nvme_iov_md": false 00:08:45.405 }, 00:08:45.405 "memory_domains": [ 00:08:45.405 { 00:08:45.405 "dma_device_id": "system", 00:08:45.405 "dma_device_type": 1 00:08:45.405 }, 00:08:45.405 { 00:08:45.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.405 "dma_device_type": 2 00:08:45.405 } 00:08:45.405 ], 00:08:45.405 "driver_specific": {} 00:08:45.405 } 00:08:45.405 ] 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.405 [2024-11-26 18:55:36.642245] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.405 [2024-11-26 18:55:36.642294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.405 [2024-11-26 18:55:36.642328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.405 [2024-11-26 18:55:36.644728] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.405 18:55:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.405 "name": "Existed_Raid", 00:08:45.405 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:45.405 "strip_size_kb": 64, 00:08:45.405 "state": "configuring", 00:08:45.405 "raid_level": "raid0", 00:08:45.405 "superblock": true, 00:08:45.405 "num_base_bdevs": 3, 00:08:45.405 "num_base_bdevs_discovered": 2, 00:08:45.405 "num_base_bdevs_operational": 3, 00:08:45.405 "base_bdevs_list": [ 00:08:45.405 { 00:08:45.405 "name": "BaseBdev1", 00:08:45.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.405 "is_configured": false, 00:08:45.405 "data_offset": 0, 00:08:45.405 "data_size": 0 00:08:45.405 }, 00:08:45.405 { 00:08:45.405 "name": "BaseBdev2", 00:08:45.405 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:45.405 "is_configured": true, 00:08:45.405 "data_offset": 2048, 00:08:45.405 "data_size": 63488 00:08:45.405 }, 00:08:45.405 { 00:08:45.405 "name": "BaseBdev3", 00:08:45.405 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:45.405 "is_configured": true, 00:08:45.405 "data_offset": 2048, 00:08:45.405 "data_size": 63488 00:08:45.405 } 00:08:45.405 ] 00:08:45.405 }' 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.405 18:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.971 [2024-11-26 18:55:37.158425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.971 18:55:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.971 "name": "Existed_Raid", 00:08:45.971 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:45.971 "strip_size_kb": 64, 
00:08:45.971 "state": "configuring", 00:08:45.971 "raid_level": "raid0", 00:08:45.971 "superblock": true, 00:08:45.971 "num_base_bdevs": 3, 00:08:45.971 "num_base_bdevs_discovered": 1, 00:08:45.971 "num_base_bdevs_operational": 3, 00:08:45.971 "base_bdevs_list": [ 00:08:45.971 { 00:08:45.971 "name": "BaseBdev1", 00:08:45.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.971 "is_configured": false, 00:08:45.971 "data_offset": 0, 00:08:45.971 "data_size": 0 00:08:45.971 }, 00:08:45.971 { 00:08:45.971 "name": null, 00:08:45.971 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:45.971 "is_configured": false, 00:08:45.971 "data_offset": 0, 00:08:45.971 "data_size": 63488 00:08:45.971 }, 00:08:45.971 { 00:08:45.971 "name": "BaseBdev3", 00:08:45.971 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:45.971 "is_configured": true, 00:08:45.971 "data_offset": 2048, 00:08:45.971 "data_size": 63488 00:08:45.971 } 00:08:45.971 ] 00:08:45.971 }' 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.971 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
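The JSON above shows what `bdev_raid_remove_base_bdev BaseBdev2` does to a superblock raid in `configuring` state: the slot keeps its position and `data_size`, but `name` becomes `null` and `is_configured` flips to false, dropping `num_base_bdevs_discovered` to 1 while `num_base_bdevs_operational` stays 3. A small sketch of how that discovered count falls out of the slot list (slot values copied from the log's dump):

```python
def discovered_count(base_bdevs_list):
    # num_base_bdevs_discovered counts slots that still hold a
    # configured base bdev; removed slots keep their position.
    return sum(1 for b in base_bdevs_list if b["is_configured"])

slots = [
    {"name": "BaseBdev1", "is_configured": False, "data_size": 0},      # never created
    {"name": None, "is_configured": False, "data_size": 63488},         # removed BaseBdev2
    {"name": "BaseBdev3", "is_configured": True, "data_size": 63488},
]
print(discovered_count(slots))  # 1
```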
BaseBdev1 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.541 [2024-11-26 18:55:37.756737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.541 BaseBdev1 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.541 
[ 00:08:46.541 { 00:08:46.541 "name": "BaseBdev1", 00:08:46.541 "aliases": [ 00:08:46.541 "629db6b2-521e-4af1-9d55-35ad46c51295" 00:08:46.541 ], 00:08:46.541 "product_name": "Malloc disk", 00:08:46.541 "block_size": 512, 00:08:46.541 "num_blocks": 65536, 00:08:46.541 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:46.541 "assigned_rate_limits": { 00:08:46.541 "rw_ios_per_sec": 0, 00:08:46.541 "rw_mbytes_per_sec": 0, 00:08:46.541 "r_mbytes_per_sec": 0, 00:08:46.541 "w_mbytes_per_sec": 0 00:08:46.541 }, 00:08:46.541 "claimed": true, 00:08:46.541 "claim_type": "exclusive_write", 00:08:46.541 "zoned": false, 00:08:46.541 "supported_io_types": { 00:08:46.541 "read": true, 00:08:46.541 "write": true, 00:08:46.541 "unmap": true, 00:08:46.541 "flush": true, 00:08:46.541 "reset": true, 00:08:46.541 "nvme_admin": false, 00:08:46.541 "nvme_io": false, 00:08:46.541 "nvme_io_md": false, 00:08:46.541 "write_zeroes": true, 00:08:46.541 "zcopy": true, 00:08:46.541 "get_zone_info": false, 00:08:46.541 "zone_management": false, 00:08:46.541 "zone_append": false, 00:08:46.541 "compare": false, 00:08:46.541 "compare_and_write": false, 00:08:46.541 "abort": true, 00:08:46.541 "seek_hole": false, 00:08:46.541 "seek_data": false, 00:08:46.541 "copy": true, 00:08:46.541 "nvme_iov_md": false 00:08:46.541 }, 00:08:46.541 "memory_domains": [ 00:08:46.541 { 00:08:46.541 "dma_device_id": "system", 00:08:46.541 "dma_device_type": 1 00:08:46.541 }, 00:08:46.541 { 00:08:46.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.541 "dma_device_type": 2 00:08:46.541 } 00:08:46.541 ], 00:08:46.541 "driver_specific": {} 00:08:46.541 } 00:08:46.541 ] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.541 "name": "Existed_Raid", 00:08:46.541 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:46.541 "strip_size_kb": 64, 00:08:46.541 "state": "configuring", 00:08:46.541 "raid_level": "raid0", 00:08:46.541 "superblock": true, 
00:08:46.541 "num_base_bdevs": 3, 00:08:46.541 "num_base_bdevs_discovered": 2, 00:08:46.541 "num_base_bdevs_operational": 3, 00:08:46.541 "base_bdevs_list": [ 00:08:46.541 { 00:08:46.541 "name": "BaseBdev1", 00:08:46.541 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:46.541 "is_configured": true, 00:08:46.541 "data_offset": 2048, 00:08:46.541 "data_size": 63488 00:08:46.541 }, 00:08:46.541 { 00:08:46.541 "name": null, 00:08:46.541 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:46.541 "is_configured": false, 00:08:46.541 "data_offset": 0, 00:08:46.541 "data_size": 63488 00:08:46.541 }, 00:08:46.541 { 00:08:46.541 "name": "BaseBdev3", 00:08:46.541 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:46.541 "is_configured": true, 00:08:46.541 "data_offset": 2048, 00:08:46.541 "data_size": 63488 00:08:46.541 } 00:08:46.541 ] 00:08:46.541 }' 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.541 18:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.112 [2024-11-26 18:55:38.369061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.112 "name": "Existed_Raid", 00:08:47.112 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:47.112 "strip_size_kb": 64, 00:08:47.112 "state": "configuring", 00:08:47.112 "raid_level": "raid0", 00:08:47.112 "superblock": true, 00:08:47.112 "num_base_bdevs": 3, 00:08:47.112 "num_base_bdevs_discovered": 1, 00:08:47.112 "num_base_bdevs_operational": 3, 00:08:47.112 "base_bdevs_list": [ 00:08:47.112 { 00:08:47.112 "name": "BaseBdev1", 00:08:47.112 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:47.112 "is_configured": true, 00:08:47.112 "data_offset": 2048, 00:08:47.112 "data_size": 63488 00:08:47.112 }, 00:08:47.112 { 00:08:47.112 "name": null, 00:08:47.112 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:47.112 "is_configured": false, 00:08:47.112 "data_offset": 0, 00:08:47.112 "data_size": 63488 00:08:47.112 }, 00:08:47.112 { 00:08:47.112 "name": null, 00:08:47.112 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:47.112 "is_configured": false, 00:08:47.112 "data_offset": 0, 00:08:47.112 "data_size": 63488 00:08:47.112 } 00:08:47.112 ] 00:08:47.112 }' 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.112 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
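The log above shows `verify_raid_bdev_state` re-checking `Existed_Raid` after `bdev_raid_remove_base_bdev BaseBdev3`: the raid stays in `"configuring"` and `num_base_bdevs_discovered` drops from 2 to 1 while `num_base_bdevs_operational` remains 3. As a minimal sketch (not the actual test code, which is bash in `bdev_raid.sh`), the same check can be expressed in Python: select the raid bdev by name, as the `jq 'select(.name == "Existed_Raid")'` step does, then compare the fields the helper asserts on. The JSON is abbreviated from the dump in the log; in the real test it comes from `rpc_cmd bdev_raid_get_bdevs all`.

```python
import json

# Abbreviated bdev_raid_get_bdevs output, copied from the log above.
RAID_BDEVS_JSON = """
[
  {
    "name": "Existed_Raid",
    "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null, "is_configured": false},
      {"name": null, "is_configured": false}
    ]
  }
]
"""

def verify_raid_bdev_state(bdevs_json, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Python analogue of verify_raid_bdev_state from bdev_raid.sh:
    pick the raid bdev by name (the jq select step in the log) and
    assert on the fields the shell helper compares."""
    info = next(b for b in json.loads(bdevs_json) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts base bdevs currently present; removing BaseBdev3
    # drops it to 1 while the raid stays "configuring".
    return info["num_base_bdevs_discovered"]

discovered = verify_raid_bdev_state(RAID_BDEVS_JSON, "Existed_Raid",
                                    "configuring", "raid0", 64, 3)
print(discovered)  # 1
```

This mirrors why the test distinguishes `num_base_bdevs_discovered` from `num_base_bdevs_operational`: a raid with a superblock remembers how many members it needs even while some are missing.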
00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 [2024-11-26 18:55:38.937360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.681 "name": "Existed_Raid", 00:08:47.681 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:47.681 "strip_size_kb": 64, 00:08:47.681 "state": "configuring", 00:08:47.681 "raid_level": "raid0", 00:08:47.681 "superblock": true, 00:08:47.681 "num_base_bdevs": 3, 00:08:47.681 "num_base_bdevs_discovered": 2, 00:08:47.681 "num_base_bdevs_operational": 3, 00:08:47.681 "base_bdevs_list": [ 00:08:47.681 { 00:08:47.681 "name": "BaseBdev1", 00:08:47.681 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:47.681 "is_configured": true, 00:08:47.681 "data_offset": 2048, 00:08:47.681 "data_size": 63488 00:08:47.681 }, 00:08:47.681 { 00:08:47.681 "name": null, 00:08:47.681 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:47.681 "is_configured": false, 00:08:47.681 "data_offset": 0, 00:08:47.681 "data_size": 63488 00:08:47.681 }, 00:08:47.681 { 00:08:47.681 "name": "BaseBdev3", 00:08:47.681 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:47.681 "is_configured": true, 00:08:47.681 "data_offset": 2048, 00:08:47.681 "data_size": 63488 00:08:47.681 } 00:08:47.681 ] 00:08:47.681 }' 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.681 18:55:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.289 [2024-11-26 18:55:39.553514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.289 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.547 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.547 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.547 "name": "Existed_Raid", 00:08:48.547 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:48.547 "strip_size_kb": 64, 00:08:48.547 "state": "configuring", 00:08:48.547 "raid_level": "raid0", 00:08:48.547 "superblock": true, 00:08:48.547 "num_base_bdevs": 3, 00:08:48.547 "num_base_bdevs_discovered": 1, 00:08:48.547 "num_base_bdevs_operational": 3, 00:08:48.547 "base_bdevs_list": [ 00:08:48.547 { 00:08:48.547 "name": null, 00:08:48.547 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:48.547 "is_configured": false, 00:08:48.547 "data_offset": 0, 00:08:48.547 "data_size": 63488 00:08:48.547 }, 00:08:48.547 { 00:08:48.547 "name": null, 00:08:48.547 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:48.547 "is_configured": false, 00:08:48.547 "data_offset": 0, 00:08:48.547 
"data_size": 63488 00:08:48.547 }, 00:08:48.547 { 00:08:48.547 "name": "BaseBdev3", 00:08:48.547 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:48.547 "is_configured": true, 00:08:48.547 "data_offset": 2048, 00:08:48.547 "data_size": 63488 00:08:48.547 } 00:08:48.547 ] 00:08:48.547 }' 00:08:48.547 18:55:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.547 18:55:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.805 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.805 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.805 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.805 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.805 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.064 [2024-11-26 18:55:40.194594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.064 18:55:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.064 "name": "Existed_Raid", 00:08:49.064 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:49.064 "strip_size_kb": 64, 00:08:49.064 "state": "configuring", 00:08:49.064 "raid_level": "raid0", 00:08:49.064 "superblock": true, 00:08:49.064 "num_base_bdevs": 3, 00:08:49.064 
"num_base_bdevs_discovered": 2, 00:08:49.064 "num_base_bdevs_operational": 3, 00:08:49.064 "base_bdevs_list": [ 00:08:49.064 { 00:08:49.064 "name": null, 00:08:49.064 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:49.064 "is_configured": false, 00:08:49.064 "data_offset": 0, 00:08:49.064 "data_size": 63488 00:08:49.064 }, 00:08:49.064 { 00:08:49.064 "name": "BaseBdev2", 00:08:49.064 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:49.064 "is_configured": true, 00:08:49.064 "data_offset": 2048, 00:08:49.064 "data_size": 63488 00:08:49.064 }, 00:08:49.064 { 00:08:49.064 "name": "BaseBdev3", 00:08:49.064 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:49.064 "is_configured": true, 00:08:49.064 "data_offset": 2048, 00:08:49.064 "data_size": 63488 00:08:49.064 } 00:08:49.064 ] 00:08:49.064 }' 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.064 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.632 18:55:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 629db6b2-521e-4af1-9d55-35ad46c51295 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 [2024-11-26 18:55:40.866210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:49.632 [2024-11-26 18:55:40.866512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:49.632 [2024-11-26 18:55:40.866537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.632 [2024-11-26 18:55:40.866858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:49.632 [2024-11-26 18:55:40.867082] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:49.632 [2024-11-26 18:55:40.867106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:49.632 [2024-11-26 18:55:40.867302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.632 NewBaseBdev 00:08:49.632 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
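At this point the log shows the key step of the test: the deleted base bdev is re-created with its original UUID (`bdev_malloc_create 32 512 -b NewBaseBdev -u 629db6b2-...`), the raid recognizes it from the superblock and assembles to `online`, and the script then calls `waitforbdev NewBaseBdev`, which polls `bdev_get_bdevs -b NewBaseBdev -t 2000` until the bdev appears. A rough, hypothetical Python analogue of that polling wait (the `get_bdevs` callable stands in for the RPC; `fake_get_bdevs` is invented for illustration):

```python
import time

def waitforbdev(bdev_name, get_bdevs, timeout=2.0, interval=0.1):
    """Sketch of waitforbdev from autotest_common.sh: poll until the named
    bdev is reported, giving up after roughly the 2000 ms timeout the
    script passes via `-t 2000`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if any(b.get("name") == bdev_name for b in get_bdevs()):
            return True
        time.sleep(interval)
    return False

# Simulated RPC: NewBaseBdev only appears after the raid re-assembles.
calls = {"n": 0}
def fake_get_bdevs():
    calls["n"] += 1
    return [{"name": "NewBaseBdev"}] if calls["n"] >= 3 else []

print(waitforbdev("NewBaseBdev", fake_get_bdevs))  # True
```

The wait is needed because base-bdev registration and raid assembly happen asynchronously on the SPDK event loop, so the bdev is not guaranteed to exist the instant the RPC returns.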
00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.633 [ 00:08:49.633 { 00:08:49.633 "name": "NewBaseBdev", 00:08:49.633 "aliases": [ 00:08:49.633 "629db6b2-521e-4af1-9d55-35ad46c51295" 00:08:49.633 ], 00:08:49.633 "product_name": "Malloc disk", 00:08:49.633 "block_size": 512, 00:08:49.633 "num_blocks": 65536, 00:08:49.633 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:49.633 "assigned_rate_limits": { 00:08:49.633 "rw_ios_per_sec": 0, 00:08:49.633 "rw_mbytes_per_sec": 0, 00:08:49.633 "r_mbytes_per_sec": 0, 00:08:49.633 "w_mbytes_per_sec": 0 00:08:49.633 }, 00:08:49.633 "claimed": true, 00:08:49.633 "claim_type": "exclusive_write", 00:08:49.633 "zoned": false, 00:08:49.633 "supported_io_types": { 00:08:49.633 "read": true, 00:08:49.633 "write": true, 
00:08:49.633 "unmap": true, 00:08:49.633 "flush": true, 00:08:49.633 "reset": true, 00:08:49.633 "nvme_admin": false, 00:08:49.633 "nvme_io": false, 00:08:49.633 "nvme_io_md": false, 00:08:49.633 "write_zeroes": true, 00:08:49.633 "zcopy": true, 00:08:49.633 "get_zone_info": false, 00:08:49.633 "zone_management": false, 00:08:49.633 "zone_append": false, 00:08:49.633 "compare": false, 00:08:49.633 "compare_and_write": false, 00:08:49.633 "abort": true, 00:08:49.633 "seek_hole": false, 00:08:49.633 "seek_data": false, 00:08:49.633 "copy": true, 00:08:49.633 "nvme_iov_md": false 00:08:49.633 }, 00:08:49.633 "memory_domains": [ 00:08:49.633 { 00:08:49.633 "dma_device_id": "system", 00:08:49.633 "dma_device_type": 1 00:08:49.633 }, 00:08:49.633 { 00:08:49.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.633 "dma_device_type": 2 00:08:49.633 } 00:08:49.633 ], 00:08:49.633 "driver_specific": {} 00:08:49.633 } 00:08:49.633 ] 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.633 "name": "Existed_Raid", 00:08:49.633 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:49.633 "strip_size_kb": 64, 00:08:49.633 "state": "online", 00:08:49.633 "raid_level": "raid0", 00:08:49.633 "superblock": true, 00:08:49.633 "num_base_bdevs": 3, 00:08:49.633 "num_base_bdevs_discovered": 3, 00:08:49.633 "num_base_bdevs_operational": 3, 00:08:49.633 "base_bdevs_list": [ 00:08:49.633 { 00:08:49.633 "name": "NewBaseBdev", 00:08:49.633 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:49.633 "is_configured": true, 00:08:49.633 "data_offset": 2048, 00:08:49.633 "data_size": 63488 00:08:49.633 }, 00:08:49.633 { 00:08:49.633 "name": "BaseBdev2", 00:08:49.633 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:49.633 "is_configured": true, 00:08:49.633 "data_offset": 2048, 00:08:49.633 "data_size": 63488 00:08:49.633 }, 00:08:49.633 { 00:08:49.633 "name": "BaseBdev3", 00:08:49.633 "uuid": 
"af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:49.633 "is_configured": true, 00:08:49.633 "data_offset": 2048, 00:08:49.633 "data_size": 63488 00:08:49.633 } 00:08:49.633 ] 00:08:49.633 }' 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.633 18:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.223 [2024-11-26 18:55:41.442801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.223 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.223 "name": "Existed_Raid", 00:08:50.223 "aliases": [ 00:08:50.223 "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1" 
00:08:50.223 ], 00:08:50.223 "product_name": "Raid Volume", 00:08:50.223 "block_size": 512, 00:08:50.223 "num_blocks": 190464, 00:08:50.223 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:50.223 "assigned_rate_limits": { 00:08:50.223 "rw_ios_per_sec": 0, 00:08:50.223 "rw_mbytes_per_sec": 0, 00:08:50.223 "r_mbytes_per_sec": 0, 00:08:50.223 "w_mbytes_per_sec": 0 00:08:50.223 }, 00:08:50.223 "claimed": false, 00:08:50.223 "zoned": false, 00:08:50.223 "supported_io_types": { 00:08:50.223 "read": true, 00:08:50.223 "write": true, 00:08:50.223 "unmap": true, 00:08:50.223 "flush": true, 00:08:50.223 "reset": true, 00:08:50.223 "nvme_admin": false, 00:08:50.224 "nvme_io": false, 00:08:50.224 "nvme_io_md": false, 00:08:50.224 "write_zeroes": true, 00:08:50.224 "zcopy": false, 00:08:50.224 "get_zone_info": false, 00:08:50.224 "zone_management": false, 00:08:50.224 "zone_append": false, 00:08:50.224 "compare": false, 00:08:50.224 "compare_and_write": false, 00:08:50.224 "abort": false, 00:08:50.224 "seek_hole": false, 00:08:50.224 "seek_data": false, 00:08:50.224 "copy": false, 00:08:50.224 "nvme_iov_md": false 00:08:50.224 }, 00:08:50.224 "memory_domains": [ 00:08:50.224 { 00:08:50.224 "dma_device_id": "system", 00:08:50.224 "dma_device_type": 1 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.224 "dma_device_type": 2 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "dma_device_id": "system", 00:08:50.224 "dma_device_type": 1 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.224 "dma_device_type": 2 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "dma_device_id": "system", 00:08:50.224 "dma_device_type": 1 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.224 "dma_device_type": 2 00:08:50.224 } 00:08:50.224 ], 00:08:50.224 "driver_specific": { 00:08:50.224 "raid": { 00:08:50.224 "uuid": "3e1a00c8-2cc8-43a1-b6dd-3d657d25dff1", 00:08:50.224 
"strip_size_kb": 64, 00:08:50.224 "state": "online", 00:08:50.224 "raid_level": "raid0", 00:08:50.224 "superblock": true, 00:08:50.224 "num_base_bdevs": 3, 00:08:50.224 "num_base_bdevs_discovered": 3, 00:08:50.224 "num_base_bdevs_operational": 3, 00:08:50.224 "base_bdevs_list": [ 00:08:50.224 { 00:08:50.224 "name": "NewBaseBdev", 00:08:50.224 "uuid": "629db6b2-521e-4af1-9d55-35ad46c51295", 00:08:50.224 "is_configured": true, 00:08:50.224 "data_offset": 2048, 00:08:50.224 "data_size": 63488 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "name": "BaseBdev2", 00:08:50.224 "uuid": "b05a5d7e-a938-49fc-a153-68fc2be3b398", 00:08:50.224 "is_configured": true, 00:08:50.224 "data_offset": 2048, 00:08:50.224 "data_size": 63488 00:08:50.224 }, 00:08:50.224 { 00:08:50.224 "name": "BaseBdev3", 00:08:50.224 "uuid": "af2bac32-a840-4908-9dd5-99328edc2dcb", 00:08:50.224 "is_configured": true, 00:08:50.224 "data_offset": 2048, 00:08:50.224 "data_size": 63488 00:08:50.224 } 00:08:50.224 ] 00:08:50.224 } 00:08:50.224 } 00:08:50.224 }' 00:08:50.224 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.224 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:50.224 BaseBdev2 00:08:50.224 BaseBdev3' 00:08:50.224 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.482 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 [2024-11-26 18:55:41.758562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.482 [2024-11-26 18:55:41.758601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.482 [2024-11-26 18:55:41.758725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.483 [2024-11-26 18:55:41.758800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.483 [2024-11-26 18:55:41.758835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64462 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64462 ']' 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64462 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64462 00:08:50.483 killing process with pid 64462 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64462' 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64462 00:08:50.483 [2024-11-26 18:55:41.798935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.483 18:55:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64462 00:08:50.740 [2024-11-26 18:55:42.076959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.179 18:55:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:52.179 00:08:52.179 real 0m11.854s 00:08:52.179 user 0m19.601s 00:08:52.179 sys 0m1.615s 00:08:52.179 ************************************ 00:08:52.179 END TEST raid_state_function_test_sb 00:08:52.179 ************************************ 00:08:52.179 18:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.179 18:55:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.179 18:55:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:52.179 18:55:43 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:52.179 18:55:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.179 18:55:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.179 ************************************ 00:08:52.179 START TEST raid_superblock_test 00:08:52.179 ************************************ 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:52.179 18:55:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65099 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65099 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65099 ']' 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.179 18:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.179 [2024-11-26 18:55:43.317353] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:08:52.179 [2024-11-26 18:55:43.317544] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65099 ] 00:08:52.179 [2024-11-26 18:55:43.506499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.437 [2024-11-26 18:55:43.629663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.695 [2024-11-26 18:55:43.835606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.695 [2024-11-26 18:55:43.835669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:53.261 
18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 malloc1 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 [2024-11-26 18:55:44.369810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:53.261 [2024-11-26 18:55:44.369882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.261 [2024-11-26 18:55:44.369932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:53.261 [2024-11-26 18:55:44.369951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.261 [2024-11-26 18:55:44.372958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.261 [2024-11-26 18:55:44.373004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:53.261 pt1 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 malloc2 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 [2024-11-26 18:55:44.428286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.261 [2024-11-26 18:55:44.428359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.261 [2024-11-26 18:55:44.428398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:53.261 [2024-11-26 18:55:44.428414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.261 [2024-11-26 18:55:44.431299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.261 [2024-11-26 18:55:44.431345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.261 
pt2 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 malloc3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 [2024-11-26 18:55:44.494974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:53.261 [2024-11-26 18:55:44.495044] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.261 [2024-11-26 18:55:44.495078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:53.261 [2024-11-26 18:55:44.495094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.261 [2024-11-26 18:55:44.498008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.261 [2024-11-26 18:55:44.498053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:53.261 pt3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 [2024-11-26 18:55:44.507045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:53.261 [2024-11-26 18:55:44.509554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.261 [2024-11-26 18:55:44.509663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:53.261 [2024-11-26 18:55:44.509920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:53.261 [2024-11-26 18:55:44.509946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.261 [2024-11-26 18:55:44.510286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:53.261 [2024-11-26 18:55:44.510527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:53.261 [2024-11-26 18:55:44.510544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:53.261 [2024-11-26 18:55:44.510752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.261 18:55:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.261 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.261 "name": "raid_bdev1", 00:08:53.261 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:53.261 "strip_size_kb": 64, 00:08:53.261 "state": "online", 00:08:53.261 "raid_level": "raid0", 00:08:53.261 "superblock": true, 00:08:53.261 "num_base_bdevs": 3, 00:08:53.261 "num_base_bdevs_discovered": 3, 00:08:53.261 "num_base_bdevs_operational": 3, 00:08:53.261 "base_bdevs_list": [ 00:08:53.261 { 00:08:53.261 "name": "pt1", 00:08:53.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.261 "is_configured": true, 00:08:53.261 "data_offset": 2048, 00:08:53.261 "data_size": 63488 00:08:53.261 }, 00:08:53.261 { 00:08:53.261 "name": "pt2", 00:08:53.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.262 "is_configured": true, 00:08:53.262 "data_offset": 2048, 00:08:53.262 "data_size": 63488 00:08:53.262 }, 00:08:53.262 { 00:08:53.262 "name": "pt3", 00:08:53.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.262 "is_configured": true, 00:08:53.262 "data_offset": 2048, 00:08:53.262 "data_size": 63488 00:08:53.262 } 00:08:53.262 ] 00:08:53.262 }' 00:08:53.262 18:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.262 18:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.828 [2024-11-26 18:55:45.071593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.828 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.828 "name": "raid_bdev1", 00:08:53.828 "aliases": [ 00:08:53.828 "30f8a9c2-7835-49ff-9171-f5d591a71b40" 00:08:53.828 ], 00:08:53.828 "product_name": "Raid Volume", 00:08:53.828 "block_size": 512, 00:08:53.828 "num_blocks": 190464, 00:08:53.828 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:53.828 "assigned_rate_limits": { 00:08:53.828 "rw_ios_per_sec": 0, 00:08:53.828 "rw_mbytes_per_sec": 0, 00:08:53.828 "r_mbytes_per_sec": 0, 00:08:53.828 "w_mbytes_per_sec": 0 00:08:53.828 }, 00:08:53.829 "claimed": false, 00:08:53.829 "zoned": false, 00:08:53.829 "supported_io_types": { 00:08:53.829 "read": true, 00:08:53.829 "write": true, 00:08:53.829 "unmap": true, 00:08:53.829 "flush": true, 00:08:53.829 "reset": true, 00:08:53.829 "nvme_admin": false, 00:08:53.829 "nvme_io": false, 00:08:53.829 "nvme_io_md": false, 00:08:53.829 "write_zeroes": true, 00:08:53.829 "zcopy": false, 00:08:53.829 "get_zone_info": false, 00:08:53.829 "zone_management": false, 00:08:53.829 "zone_append": false, 00:08:53.829 "compare": 
false, 00:08:53.829 "compare_and_write": false, 00:08:53.829 "abort": false, 00:08:53.829 "seek_hole": false, 00:08:53.829 "seek_data": false, 00:08:53.829 "copy": false, 00:08:53.829 "nvme_iov_md": false 00:08:53.829 }, 00:08:53.829 "memory_domains": [ 00:08:53.829 { 00:08:53.829 "dma_device_id": "system", 00:08:53.829 "dma_device_type": 1 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.829 "dma_device_type": 2 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "dma_device_id": "system", 00:08:53.829 "dma_device_type": 1 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.829 "dma_device_type": 2 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "dma_device_id": "system", 00:08:53.829 "dma_device_type": 1 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.829 "dma_device_type": 2 00:08:53.829 } 00:08:53.829 ], 00:08:53.829 "driver_specific": { 00:08:53.829 "raid": { 00:08:53.829 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:53.829 "strip_size_kb": 64, 00:08:53.829 "state": "online", 00:08:53.829 "raid_level": "raid0", 00:08:53.829 "superblock": true, 00:08:53.829 "num_base_bdevs": 3, 00:08:53.829 "num_base_bdevs_discovered": 3, 00:08:53.829 "num_base_bdevs_operational": 3, 00:08:53.829 "base_bdevs_list": [ 00:08:53.829 { 00:08:53.829 "name": "pt1", 00:08:53.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.829 "is_configured": true, 00:08:53.829 "data_offset": 2048, 00:08:53.829 "data_size": 63488 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "name": "pt2", 00:08:53.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.829 "is_configured": true, 00:08:53.829 "data_offset": 2048, 00:08:53.829 "data_size": 63488 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "name": "pt3", 00:08:53.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.829 "is_configured": true, 00:08:53.829 "data_offset": 2048, 00:08:53.829 "data_size": 
63488 00:08:53.829 } 00:08:53.829 ] 00:08:53.829 } 00:08:53.829 } 00:08:53.829 }' 00:08:53.829 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.829 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.829 pt2 00:08:53.829 pt3' 00:08:53.829 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 [2024-11-26 18:55:45.399632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=30f8a9c2-7835-49ff-9171-f5d591a71b40 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 30f8a9c2-7835-49ff-9171-f5d591a71b40 ']' 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 [2024-11-26 18:55:45.443286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.088 [2024-11-26 18:55:45.443326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.088 [2024-11-26 18:55:45.443439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.088 [2024-11-26 18:55:45.443560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.088 [2024-11-26 18:55:45.443576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:54.348 18:55:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 [2024-11-26 18:55:45.599403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:54.348 [2024-11-26 18:55:45.602254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:54.348 [2024-11-26 18:55:45.602504] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:54.348 [2024-11-26 18:55:45.602721] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:54.348 [2024-11-26 18:55:45.602958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:54.348 [2024-11-26 18:55:45.603187] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:54.348 [2024-11-26 18:55:45.603357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.348 [2024-11-26 18:55:45.603412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:54.348 request: 00:08:54.348 { 00:08:54.348 "name": "raid_bdev1", 00:08:54.348 "raid_level": "raid0", 00:08:54.348 "base_bdevs": [ 00:08:54.348 "malloc1", 00:08:54.348 "malloc2", 00:08:54.348 "malloc3" 00:08:54.348 ], 00:08:54.348 "strip_size_kb": 64, 00:08:54.348 "superblock": false, 00:08:54.348 "method": "bdev_raid_create", 00:08:54.348 "req_id": 1 00:08:54.348 } 00:08:54.348 Got JSON-RPC error response 00:08:54.348 response: 00:08:54.348 { 00:08:54.348 "code": -17, 00:08:54.348 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:54.348 } 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.348 [2024-11-26 18:55:45.667857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.348 [2024-11-26 18:55:45.667984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.348 [2024-11-26 18:55:45.668019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:54.348 [2024-11-26 18:55:45.668034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.348 [2024-11-26 18:55:45.671210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.348 [2024-11-26 18:55:45.671257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.348 [2024-11-26 18:55:45.671380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:54.348 [2024-11-26 18:55:45.671465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:54.348 pt1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.348 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.349 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.607 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.607 "name": "raid_bdev1", 00:08:54.607 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:54.607 
"strip_size_kb": 64, 00:08:54.607 "state": "configuring", 00:08:54.607 "raid_level": "raid0", 00:08:54.607 "superblock": true, 00:08:54.607 "num_base_bdevs": 3, 00:08:54.607 "num_base_bdevs_discovered": 1, 00:08:54.607 "num_base_bdevs_operational": 3, 00:08:54.607 "base_bdevs_list": [ 00:08:54.607 { 00:08:54.607 "name": "pt1", 00:08:54.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.607 "is_configured": true, 00:08:54.607 "data_offset": 2048, 00:08:54.607 "data_size": 63488 00:08:54.607 }, 00:08:54.607 { 00:08:54.607 "name": null, 00:08:54.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.607 "is_configured": false, 00:08:54.607 "data_offset": 2048, 00:08:54.607 "data_size": 63488 00:08:54.607 }, 00:08:54.607 { 00:08:54.607 "name": null, 00:08:54.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.607 "is_configured": false, 00:08:54.607 "data_offset": 2048, 00:08:54.607 "data_size": 63488 00:08:54.607 } 00:08:54.607 ] 00:08:54.607 }' 00:08:54.607 18:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.607 18:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 [2024-11-26 18:55:46.200076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.868 [2024-11-26 18:55:46.200169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.868 [2024-11-26 18:55:46.200226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:54.868 [2024-11-26 18:55:46.200242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.868 [2024-11-26 18:55:46.200809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.868 [2024-11-26 18:55:46.200841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.868 [2024-11-26 18:55:46.201019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:54.868 [2024-11-26 18:55:46.201060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.868 pt2 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 [2024-11-26 18:55:46.208032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.868 18:55:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.129 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.129 "name": "raid_bdev1", 00:08:55.129 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:55.129 "strip_size_kb": 64, 00:08:55.129 "state": "configuring", 00:08:55.129 "raid_level": "raid0", 00:08:55.129 "superblock": true, 00:08:55.129 "num_base_bdevs": 3, 00:08:55.129 "num_base_bdevs_discovered": 1, 00:08:55.129 "num_base_bdevs_operational": 3, 00:08:55.129 "base_bdevs_list": [ 00:08:55.129 { 00:08:55.129 "name": "pt1", 00:08:55.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.129 "is_configured": true, 00:08:55.129 "data_offset": 2048, 00:08:55.129 "data_size": 63488 00:08:55.129 }, 00:08:55.129 { 00:08:55.129 "name": null, 00:08:55.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.129 "is_configured": false, 00:08:55.129 "data_offset": 0, 00:08:55.129 "data_size": 63488 00:08:55.129 }, 00:08:55.129 { 00:08:55.129 "name": null, 00:08:55.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.129 
"is_configured": false, 00:08:55.129 "data_offset": 2048, 00:08:55.129 "data_size": 63488 00:08:55.129 } 00:08:55.129 ] 00:08:55.129 }' 00:08:55.129 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.129 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 [2024-11-26 18:55:46.728218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.389 [2024-11-26 18:55:46.728329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.389 [2024-11-26 18:55:46.728372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:55.389 [2024-11-26 18:55:46.728388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.389 [2024-11-26 18:55:46.729032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.389 [2024-11-26 18:55:46.729066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.389 [2024-11-26 18:55:46.729175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.389 [2024-11-26 18:55:46.729214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.389 pt2 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 [2024-11-26 18:55:46.740198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.389 [2024-11-26 18:55:46.740266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.389 [2024-11-26 18:55:46.740291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:55.389 [2024-11-26 18:55:46.740308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.389 [2024-11-26 18:55:46.740838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.389 [2024-11-26 18:55:46.740891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.389 [2024-11-26 18:55:46.741010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:55.389 [2024-11-26 18:55:46.741055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.389 [2024-11-26 18:55:46.741215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.389 [2024-11-26 18:55:46.741237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.389 [2024-11-26 18:55:46.741555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:55.389 [2024-11-26 18:55:46.741816] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.389 [2024-11-26 18:55:46.741838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:55.389 [2024-11-26 18:55:46.742045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.389 pt3 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.389 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.655 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.655 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.655 "name": "raid_bdev1", 00:08:55.655 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:55.655 "strip_size_kb": 64, 00:08:55.655 "state": "online", 00:08:55.655 "raid_level": "raid0", 00:08:55.655 "superblock": true, 00:08:55.656 "num_base_bdevs": 3, 00:08:55.656 "num_base_bdevs_discovered": 3, 00:08:55.656 "num_base_bdevs_operational": 3, 00:08:55.656 "base_bdevs_list": [ 00:08:55.656 { 00:08:55.656 "name": "pt1", 00:08:55.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.656 "is_configured": true, 00:08:55.656 "data_offset": 2048, 00:08:55.656 "data_size": 63488 00:08:55.656 }, 00:08:55.656 { 00:08:55.656 "name": "pt2", 00:08:55.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.656 "is_configured": true, 00:08:55.656 "data_offset": 2048, 00:08:55.656 "data_size": 63488 00:08:55.656 }, 00:08:55.656 { 00:08:55.656 "name": "pt3", 00:08:55.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.656 "is_configured": true, 00:08:55.656 "data_offset": 2048, 00:08:55.656 "data_size": 63488 00:08:55.656 } 00:08:55.656 ] 00:08:55.656 }' 00:08:55.656 18:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.656 18:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:55.914 18:55:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.914 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.173 [2024-11-26 18:55:47.280809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.173 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.173 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.173 "name": "raid_bdev1", 00:08:56.173 "aliases": [ 00:08:56.173 "30f8a9c2-7835-49ff-9171-f5d591a71b40" 00:08:56.173 ], 00:08:56.173 "product_name": "Raid Volume", 00:08:56.173 "block_size": 512, 00:08:56.173 "num_blocks": 190464, 00:08:56.173 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:56.173 "assigned_rate_limits": { 00:08:56.173 "rw_ios_per_sec": 0, 00:08:56.173 "rw_mbytes_per_sec": 0, 00:08:56.173 "r_mbytes_per_sec": 0, 00:08:56.173 "w_mbytes_per_sec": 0 00:08:56.173 }, 00:08:56.173 "claimed": false, 00:08:56.173 "zoned": false, 00:08:56.173 "supported_io_types": { 00:08:56.173 "read": true, 00:08:56.173 "write": true, 00:08:56.173 "unmap": true, 00:08:56.173 "flush": true, 00:08:56.173 "reset": true, 00:08:56.173 "nvme_admin": false, 00:08:56.173 "nvme_io": false, 00:08:56.173 "nvme_io_md": false, 00:08:56.173 
"write_zeroes": true, 00:08:56.173 "zcopy": false, 00:08:56.173 "get_zone_info": false, 00:08:56.173 "zone_management": false, 00:08:56.173 "zone_append": false, 00:08:56.173 "compare": false, 00:08:56.173 "compare_and_write": false, 00:08:56.173 "abort": false, 00:08:56.173 "seek_hole": false, 00:08:56.173 "seek_data": false, 00:08:56.173 "copy": false, 00:08:56.173 "nvme_iov_md": false 00:08:56.173 }, 00:08:56.173 "memory_domains": [ 00:08:56.173 { 00:08:56.173 "dma_device_id": "system", 00:08:56.173 "dma_device_type": 1 00:08:56.173 }, 00:08:56.173 { 00:08:56.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.173 "dma_device_type": 2 00:08:56.173 }, 00:08:56.173 { 00:08:56.173 "dma_device_id": "system", 00:08:56.173 "dma_device_type": 1 00:08:56.173 }, 00:08:56.173 { 00:08:56.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.173 "dma_device_type": 2 00:08:56.173 }, 00:08:56.173 { 00:08:56.173 "dma_device_id": "system", 00:08:56.173 "dma_device_type": 1 00:08:56.173 }, 00:08:56.173 { 00:08:56.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.173 "dma_device_type": 2 00:08:56.173 } 00:08:56.173 ], 00:08:56.173 "driver_specific": { 00:08:56.173 "raid": { 00:08:56.173 "uuid": "30f8a9c2-7835-49ff-9171-f5d591a71b40", 00:08:56.173 "strip_size_kb": 64, 00:08:56.173 "state": "online", 00:08:56.173 "raid_level": "raid0", 00:08:56.173 "superblock": true, 00:08:56.173 "num_base_bdevs": 3, 00:08:56.173 "num_base_bdevs_discovered": 3, 00:08:56.174 "num_base_bdevs_operational": 3, 00:08:56.174 "base_bdevs_list": [ 00:08:56.174 { 00:08:56.174 "name": "pt1", 00:08:56.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.174 "is_configured": true, 00:08:56.174 "data_offset": 2048, 00:08:56.174 "data_size": 63488 00:08:56.174 }, 00:08:56.174 { 00:08:56.174 "name": "pt2", 00:08:56.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.174 "is_configured": true, 00:08:56.174 "data_offset": 2048, 00:08:56.174 "data_size": 63488 00:08:56.174 }, 00:08:56.174 
{ 00:08:56.174 "name": "pt3", 00:08:56.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.174 "is_configured": true, 00:08:56.174 "data_offset": 2048, 00:08:56.174 "data_size": 63488 00:08:56.174 } 00:08:56.174 ] 00:08:56.174 } 00:08:56.174 } 00:08:56.174 }' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:56.174 pt2 00:08:56.174 pt3' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.174 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.433 [2024-11-26 
18:55:47.584959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 30f8a9c2-7835-49ff-9171-f5d591a71b40 '!=' 30f8a9c2-7835-49ff-9171-f5d591a71b40 ']' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65099 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65099 ']' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65099 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65099 00:08:56.433 killing process with pid 65099 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65099' 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65099 00:08:56.433 [2024-11-26 18:55:47.664855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.433 18:55:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65099 00:08:56.433 [2024-11-26 18:55:47.665039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.433 [2024-11-26 18:55:47.665130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.433 [2024-11-26 18:55:47.665152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:56.692 [2024-11-26 18:55:47.943981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.064 ************************************ 00:08:58.064 END TEST raid_superblock_test 00:08:58.064 ************************************ 00:08:58.064 18:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:58.064 00:08:58.064 real 0m5.810s 00:08:58.064 user 0m8.761s 00:08:58.064 sys 0m0.850s 00:08:58.064 18:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.064 18:55:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.064 18:55:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:58.064 18:55:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.064 18:55:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.064 18:55:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.064 ************************************ 00:08:58.064 START TEST raid_read_error_test 00:08:58.064 ************************************ 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:58.064 18:55:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yaUEMar4PV 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65358 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65358 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65358 ']' 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.064 18:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.064 [2024-11-26 18:55:49.196800] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:08:58.064 [2024-11-26 18:55:49.197261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65358 ] 00:08:58.064 [2024-11-26 18:55:49.386149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.322 [2024-11-26 18:55:49.521606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.580 [2024-11-26 18:55:49.732859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.580 [2024-11-26 18:55:49.732934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.147 BaseBdev1_malloc 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.147 true 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.147 [2024-11-26 18:55:50.347120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.147 [2024-11-26 18:55:50.347210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.147 [2024-11-26 18:55:50.347242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.147 [2024-11-26 18:55:50.347261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.147 [2024-11-26 18:55:50.350200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.147 [2024-11-26 18:55:50.350405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.147 BaseBdev1 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.147 BaseBdev2_malloc 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.147 true 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.147 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.147 [2024-11-26 18:55:50.413309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.147 [2024-11-26 18:55:50.413556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.148 [2024-11-26 18:55:50.413595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.148 [2024-11-26 18:55:50.413623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.148 [2024-11-26 18:55:50.416673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.148 [2024-11-26 18:55:50.416728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.148 BaseBdev2 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.148 BaseBdev3_malloc 00:08:59.148 18:55:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.148 true 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.148 [2024-11-26 18:55:50.489075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:59.148 [2024-11-26 18:55:50.489145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.148 [2024-11-26 18:55:50.489174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:59.148 [2024-11-26 18:55:50.489192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.148 [2024-11-26 18:55:50.492144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.148 [2024-11-26 18:55:50.492326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:59.148 BaseBdev3 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.148 [2024-11-26 18:55:50.501275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.148 [2024-11-26 18:55:50.504047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.148 [2024-11-26 18:55:50.504291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.148 [2024-11-26 18:55:50.504687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:59.148 [2024-11-26 18:55:50.504820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.148 [2024-11-26 18:55:50.505257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:59.148 [2024-11-26 18:55:50.505606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:59.148 [2024-11-26 18:55:50.505754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:59.148 [2024-11-26 18:55:50.506150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.148 18:55:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.148 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.420 "name": "raid_bdev1", 00:08:59.420 "uuid": "5ea7a6be-a710-4e9c-b68e-3583d029ac28", 00:08:59.420 "strip_size_kb": 64, 00:08:59.420 "state": "online", 00:08:59.420 "raid_level": "raid0", 00:08:59.420 "superblock": true, 00:08:59.420 "num_base_bdevs": 3, 00:08:59.420 "num_base_bdevs_discovered": 3, 00:08:59.420 "num_base_bdevs_operational": 3, 00:08:59.420 "base_bdevs_list": [ 00:08:59.420 { 00:08:59.420 "name": "BaseBdev1", 00:08:59.420 "uuid": "1232379c-1ef9-589b-a219-84e5ea0ec8d4", 00:08:59.420 "is_configured": true, 00:08:59.420 "data_offset": 2048, 00:08:59.420 "data_size": 63488 00:08:59.420 }, 00:08:59.420 { 00:08:59.420 "name": "BaseBdev2", 00:08:59.420 "uuid": "5360499a-f64e-56f4-886b-3bd9084bcb95", 00:08:59.420 "is_configured": true, 00:08:59.420 "data_offset": 2048, 00:08:59.420 "data_size": 63488 
00:08:59.420 }, 00:08:59.420 { 00:08:59.420 "name": "BaseBdev3", 00:08:59.420 "uuid": "69805fa9-8d3a-53c7-9cd8-6076d30b4612", 00:08:59.420 "is_configured": true, 00:08:59.420 "data_offset": 2048, 00:08:59.420 "data_size": 63488 00:08:59.420 } 00:08:59.420 ] 00:08:59.420 }' 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.420 18:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.689 18:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:59.689 18:55:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.947 [2024-11-26 18:55:51.147861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.884 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.885 "name": "raid_bdev1", 00:09:00.885 "uuid": "5ea7a6be-a710-4e9c-b68e-3583d029ac28", 00:09:00.885 "strip_size_kb": 64, 00:09:00.885 "state": "online", 00:09:00.885 "raid_level": "raid0", 00:09:00.885 "superblock": true, 00:09:00.885 "num_base_bdevs": 3, 00:09:00.885 "num_base_bdevs_discovered": 3, 00:09:00.885 "num_base_bdevs_operational": 3, 00:09:00.885 "base_bdevs_list": [ 00:09:00.885 { 00:09:00.885 "name": "BaseBdev1", 00:09:00.885 "uuid": "1232379c-1ef9-589b-a219-84e5ea0ec8d4", 00:09:00.885 "is_configured": true, 00:09:00.885 "data_offset": 2048, 00:09:00.885 "data_size": 63488 
00:09:00.885 }, 00:09:00.885 { 00:09:00.885 "name": "BaseBdev2", 00:09:00.885 "uuid": "5360499a-f64e-56f4-886b-3bd9084bcb95", 00:09:00.885 "is_configured": true, 00:09:00.885 "data_offset": 2048, 00:09:00.885 "data_size": 63488 00:09:00.885 }, 00:09:00.885 { 00:09:00.885 "name": "BaseBdev3", 00:09:00.885 "uuid": "69805fa9-8d3a-53c7-9cd8-6076d30b4612", 00:09:00.885 "is_configured": true, 00:09:00.885 "data_offset": 2048, 00:09:00.885 "data_size": 63488 00:09:00.885 } 00:09:00.885 ] 00:09:00.885 }' 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.885 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.452 [2024-11-26 18:55:52.567984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.452 [2024-11-26 18:55:52.568030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.452 [2024-11-26 18:55:52.571432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.452 [2024-11-26 18:55:52.571632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.452 [2024-11-26 18:55:52.571710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.452 [2024-11-26 18:55:52.571727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:01.452 { 00:09:01.452 "results": [ 00:09:01.452 { 00:09:01.452 "job": "raid_bdev1", 00:09:01.452 "core_mask": "0x1", 00:09:01.452 "workload": "randrw", 00:09:01.452 "percentage": 50, 
00:09:01.452 "status": "finished", 00:09:01.452 "queue_depth": 1, 00:09:01.452 "io_size": 131072, 00:09:01.452 "runtime": 1.417529, 00:09:01.452 "iops": 10291.147482697002, 00:09:01.452 "mibps": 1286.3934353371253, 00:09:01.452 "io_failed": 1, 00:09:01.452 "io_timeout": 0, 00:09:01.452 "avg_latency_us": 135.7713784358078, 00:09:01.452 "min_latency_us": 27.345454545454544, 00:09:01.452 "max_latency_us": 1936.290909090909 00:09:01.452 } 00:09:01.452 ], 00:09:01.452 "core_count": 1 00:09:01.452 } 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65358 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65358 ']' 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65358 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65358 00:09:01.452 killing process with pid 65358 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65358' 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65358 00:09:01.452 [2024-11-26 18:55:52.602363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.452 18:55:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65358 00:09:01.452 [2024-11-26 
18:55:52.814162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yaUEMar4PV 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:02.827 ************************************ 00:09:02.827 END TEST raid_read_error_test 00:09:02.827 ************************************ 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:02.827 00:09:02.827 real 0m4.897s 00:09:02.827 user 0m6.135s 00:09:02.827 sys 0m0.599s 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.827 18:55:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.827 18:55:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:02.827 18:55:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.827 18:55:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.827 18:55:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.827 ************************************ 00:09:02.827 START TEST raid_write_error_test 00:09:02.827 ************************************ 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:02.827 18:55:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.827 18:55:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rooSBOaeM8 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65511 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65511 00:09:02.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65511 ']' 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.827 18:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.827 [2024-11-26 18:55:54.146930] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:09:02.827 [2024-11-26 18:55:54.147117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65511 ] 00:09:03.086 [2024-11-26 18:55:54.334390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.345 [2024-11-26 18:55:54.468057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.345 [2024-11-26 18:55:54.694302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.345 [2024-11-26 18:55:54.694353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.913 BaseBdev1_malloc 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.913 true 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.913 [2024-11-26 18:55:55.219292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.913 [2024-11-26 18:55:55.219367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.913 [2024-11-26 18:55:55.219398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:03.913 [2024-11-26 18:55:55.219417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.913 [2024-11-26 18:55:55.222287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.913 [2024-11-26 18:55:55.222340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.913 BaseBdev1 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.913 BaseBdev2_malloc 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.913 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.914 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.173 true 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.173 [2024-11-26 18:55:55.284711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.173 [2024-11-26 18:55:55.284941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.173 [2024-11-26 18:55:55.284979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:04.173 [2024-11-26 18:55:55.284999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.173 [2024-11-26 18:55:55.287885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.173 [2024-11-26 18:55:55.288100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.173 BaseBdev2 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.173 18:55:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.173 BaseBdev3_malloc 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.173 true 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.173 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.173 [2024-11-26 18:55:55.357092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:04.173 [2024-11-26 18:55:55.357164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.173 [2024-11-26 18:55:55.357192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:04.173 [2024-11-26 18:55:55.357210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.173 [2024-11-26 18:55:55.360359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.173 [2024-11-26 18:55:55.360408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:04.174 BaseBdev3 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.174 [2024-11-26 18:55:55.369344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.174 [2024-11-26 18:55:55.371928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.174 [2024-11-26 18:55:55.372052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.174 [2024-11-26 18:55:55.372347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.174 [2024-11-26 18:55:55.372367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.174 [2024-11-26 18:55:55.372688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:04.174 [2024-11-26 18:55:55.372919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.174 [2024-11-26 18:55:55.372974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:04.174 [2024-11-26 18:55:55.373214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.174 "name": "raid_bdev1", 00:09:04.174 "uuid": "0ed1e0e1-407d-4948-b6f2-33e523991004", 00:09:04.174 "strip_size_kb": 64, 00:09:04.174 "state": "online", 00:09:04.174 "raid_level": "raid0", 00:09:04.174 "superblock": true, 00:09:04.174 "num_base_bdevs": 3, 00:09:04.174 "num_base_bdevs_discovered": 3, 00:09:04.174 "num_base_bdevs_operational": 3, 00:09:04.174 "base_bdevs_list": [ 00:09:04.174 { 00:09:04.174 "name": "BaseBdev1", 
00:09:04.174 "uuid": "aa75f9e3-e87e-59b1-affd-5fea0f086a36", 00:09:04.174 "is_configured": true, 00:09:04.174 "data_offset": 2048, 00:09:04.174 "data_size": 63488 00:09:04.174 }, 00:09:04.174 { 00:09:04.174 "name": "BaseBdev2", 00:09:04.174 "uuid": "b261675f-ce00-5ca9-ae54-74504c161737", 00:09:04.174 "is_configured": true, 00:09:04.174 "data_offset": 2048, 00:09:04.174 "data_size": 63488 00:09:04.174 }, 00:09:04.174 { 00:09:04.174 "name": "BaseBdev3", 00:09:04.174 "uuid": "ce327c76-32cd-56f8-873b-c8fc1d795317", 00:09:04.174 "is_configured": true, 00:09:04.174 "data_offset": 2048, 00:09:04.174 "data_size": 63488 00:09:04.174 } 00:09:04.174 ] 00:09:04.174 }' 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.174 18:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.742 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:04.742 18:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:04.742 [2024-11-26 18:55:55.995026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.678 "name": "raid_bdev1", 00:09:05.678 "uuid": "0ed1e0e1-407d-4948-b6f2-33e523991004", 00:09:05.678 "strip_size_kb": 64, 00:09:05.678 "state": "online", 00:09:05.678 
"raid_level": "raid0", 00:09:05.678 "superblock": true, 00:09:05.678 "num_base_bdevs": 3, 00:09:05.678 "num_base_bdevs_discovered": 3, 00:09:05.678 "num_base_bdevs_operational": 3, 00:09:05.678 "base_bdevs_list": [ 00:09:05.678 { 00:09:05.678 "name": "BaseBdev1", 00:09:05.678 "uuid": "aa75f9e3-e87e-59b1-affd-5fea0f086a36", 00:09:05.678 "is_configured": true, 00:09:05.678 "data_offset": 2048, 00:09:05.678 "data_size": 63488 00:09:05.678 }, 00:09:05.678 { 00:09:05.678 "name": "BaseBdev2", 00:09:05.678 "uuid": "b261675f-ce00-5ca9-ae54-74504c161737", 00:09:05.678 "is_configured": true, 00:09:05.678 "data_offset": 2048, 00:09:05.678 "data_size": 63488 00:09:05.678 }, 00:09:05.678 { 00:09:05.678 "name": "BaseBdev3", 00:09:05.678 "uuid": "ce327c76-32cd-56f8-873b-c8fc1d795317", 00:09:05.678 "is_configured": true, 00:09:05.678 "data_offset": 2048, 00:09:05.678 "data_size": 63488 00:09:05.678 } 00:09:05.678 ] 00:09:05.678 }' 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.678 18:55:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.246 18:55:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.246 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.246 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.246 [2024-11-26 18:55:57.399268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.246 [2024-11-26 18:55:57.399484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.246 [2024-11-26 18:55:57.403889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.246 { 00:09:06.246 "results": [ 00:09:06.246 { 00:09:06.246 "job": "raid_bdev1", 00:09:06.246 "core_mask": "0x1", 00:09:06.246 "workload": "randrw", 00:09:06.246 "percentage": 
50, 00:09:06.246 "status": "finished", 00:09:06.246 "queue_depth": 1, 00:09:06.246 "io_size": 131072, 00:09:06.246 "runtime": 1.401936, 00:09:06.246 "iops": 9995.46341630431, 00:09:06.246 "mibps": 1249.4329270380388, 00:09:06.246 "io_failed": 1, 00:09:06.246 "io_timeout": 0, 00:09:06.246 "avg_latency_us": 139.52836981200616, 00:09:06.246 "min_latency_us": 38.167272727272724, 00:09:06.246 "max_latency_us": 1936.290909090909 00:09:06.246 } 00:09:06.246 ], 00:09:06.246 "core_count": 1 00:09:06.246 } 00:09:06.246 [2024-11-26 18:55:57.404228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.246 [2024-11-26 18:55:57.404327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.247 [2024-11-26 18:55:57.404351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65511 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65511 ']' 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65511 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65511 00:09:06.247 killing process with pid 65511 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.247 18:55:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65511' 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65511 00:09:06.247 [2024-11-26 18:55:57.445272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.247 18:55:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65511 00:09:06.505 [2024-11-26 18:55:57.656418] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rooSBOaeM8 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:07.443 ************************************ 00:09:07.443 END TEST raid_write_error_test 00:09:07.443 ************************************ 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:07.443 00:09:07.443 real 0m4.729s 00:09:07.443 user 0m5.874s 00:09:07.443 sys 0m0.594s 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.443 18:55:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.443 18:55:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:07.443 18:55:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:07.443 18:55:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.443 18:55:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.443 18:55:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.443 ************************************ 00:09:07.443 START TEST raid_state_function_test 00:09:07.443 ************************************ 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.443 18:55:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:07.443 Process raid pid: 65649 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65649 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65649' 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65649 00:09:07.443 18:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.702 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.702 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65649 ']' 00:09:07.703 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.703 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.703 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.703 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.703 18:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.703 [2024-11-26 18:55:58.911840] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:09:07.703 [2024-11-26 18:55:58.912317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.960 [2024-11-26 18:55:59.099568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.960 [2024-11-26 18:55:59.236737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.219 [2024-11-26 18:55:59.441935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.219 [2024-11-26 18:55:59.441987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 
-r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.788 [2024-11-26 18:55:59.920555] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.788 [2024-11-26 18:55:59.920636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.788 [2024-11-26 18:55:59.920654] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.788 [2024-11-26 18:55:59.920670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.788 [2024-11-26 18:55:59.920679] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.788 [2024-11-26 18:55:59.920693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.788 "name": "Existed_Raid", 00:09:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.788 "strip_size_kb": 64, 00:09:08.788 "state": "configuring", 00:09:08.788 "raid_level": "concat", 00:09:08.788 "superblock": false, 00:09:08.788 "num_base_bdevs": 3, 00:09:08.788 "num_base_bdevs_discovered": 0, 00:09:08.788 "num_base_bdevs_operational": 3, 00:09:08.788 "base_bdevs_list": [ 00:09:08.788 { 00:09:08.788 "name": "BaseBdev1", 00:09:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.788 "is_configured": false, 00:09:08.788 "data_offset": 0, 00:09:08.788 "data_size": 0 00:09:08.788 }, 00:09:08.788 { 00:09:08.788 "name": "BaseBdev2", 00:09:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.788 "is_configured": false, 00:09:08.788 "data_offset": 0, 00:09:08.788 "data_size": 0 00:09:08.788 }, 00:09:08.788 { 00:09:08.788 "name": "BaseBdev3", 00:09:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.788 "is_configured": 
false, 00:09:08.788 "data_offset": 0, 00:09:08.788 "data_size": 0 00:09:08.788 } 00:09:08.788 ] 00:09:08.788 }' 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.788 18:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.365 [2024-11-26 18:56:00.464675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.365 [2024-11-26 18:56:00.464721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.365 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.365 [2024-11-26 18:56:00.476650] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.365 [2024-11-26 18:56:00.476874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.365 [2024-11-26 18:56:00.477021] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.365 [2024-11-26 18:56:00.477086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.366 [2024-11-26 18:56:00.477193] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.366 [2024-11-26 18:56:00.477348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.366 [2024-11-26 18:56:00.521889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.366 BaseBdev1 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.366 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.367 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.367 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.367 [ 00:09:09.367 { 00:09:09.367 "name": "BaseBdev1", 00:09:09.367 "aliases": [ 00:09:09.367 "2b0dba12-fb8f-4268-b8cb-847f2a84b288" 00:09:09.367 ], 00:09:09.367 "product_name": "Malloc disk", 00:09:09.367 "block_size": 512, 00:09:09.367 "num_blocks": 65536, 00:09:09.367 "uuid": "2b0dba12-fb8f-4268-b8cb-847f2a84b288", 00:09:09.367 "assigned_rate_limits": { 00:09:09.367 "rw_ios_per_sec": 0, 00:09:09.367 "rw_mbytes_per_sec": 0, 00:09:09.367 "r_mbytes_per_sec": 0, 00:09:09.367 "w_mbytes_per_sec": 0 00:09:09.367 }, 00:09:09.367 "claimed": true, 00:09:09.367 "claim_type": "exclusive_write", 00:09:09.367 "zoned": false, 00:09:09.367 "supported_io_types": { 00:09:09.367 "read": true, 00:09:09.367 "write": true, 00:09:09.367 "unmap": true, 00:09:09.367 "flush": true, 00:09:09.367 "reset": true, 00:09:09.367 "nvme_admin": false, 00:09:09.367 "nvme_io": false, 00:09:09.367 "nvme_io_md": false, 00:09:09.367 "write_zeroes": true, 00:09:09.367 "zcopy": true, 00:09:09.367 "get_zone_info": false, 00:09:09.367 "zone_management": false, 00:09:09.367 "zone_append": false, 00:09:09.367 "compare": false, 00:09:09.367 "compare_and_write": false, 00:09:09.367 "abort": true, 00:09:09.367 "seek_hole": false, 00:09:09.367 "seek_data": false, 00:09:09.367 "copy": true, 00:09:09.367 "nvme_iov_md": false 00:09:09.367 }, 00:09:09.367 "memory_domains": [ 00:09:09.367 { 00:09:09.367 "dma_device_id": "system", 00:09:09.367 "dma_device_type": 1 00:09:09.367 }, 00:09:09.367 { 00:09:09.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.367 "dma_device_type": 2 00:09:09.367 } 00:09:09.367 ], 
00:09:09.367 "driver_specific": {} 00:09:09.368 } 00:09:09.368 ] 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:09.368 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.368 "name": "Existed_Raid", 00:09:09.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.368 "strip_size_kb": 64, 00:09:09.368 "state": "configuring", 00:09:09.368 "raid_level": "concat", 00:09:09.368 "superblock": false, 00:09:09.368 "num_base_bdevs": 3, 00:09:09.368 "num_base_bdevs_discovered": 1, 00:09:09.368 "num_base_bdevs_operational": 3, 00:09:09.368 "base_bdevs_list": [ 00:09:09.368 { 00:09:09.368 "name": "BaseBdev1", 00:09:09.368 "uuid": "2b0dba12-fb8f-4268-b8cb-847f2a84b288", 00:09:09.368 "is_configured": true, 00:09:09.371 "data_offset": 0, 00:09:09.371 "data_size": 65536 00:09:09.371 }, 00:09:09.371 { 00:09:09.371 "name": "BaseBdev2", 00:09:09.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.371 "is_configured": false, 00:09:09.371 "data_offset": 0, 00:09:09.371 "data_size": 0 00:09:09.371 }, 00:09:09.371 { 00:09:09.371 "name": "BaseBdev3", 00:09:09.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.371 "is_configured": false, 00:09:09.371 "data_offset": 0, 00:09:09.371 "data_size": 0 00:09:09.371 } 00:09:09.371 ] 00:09:09.371 }' 00:09:09.371 18:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.371 18:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.948 [2024-11-26 18:56:01.126204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.948 [2024-11-26 18:56:01.126315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.948 [2024-11-26 18:56:01.134220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.948 [2024-11-26 18:56:01.136965] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.948 [2024-11-26 18:56:01.137220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.948 [2024-11-26 18:56:01.137251] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.948 [2024-11-26 18:56:01.137270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.948 "name": "Existed_Raid", 00:09:09.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.948 "strip_size_kb": 64, 00:09:09.948 "state": "configuring", 00:09:09.948 "raid_level": "concat", 00:09:09.948 "superblock": false, 00:09:09.948 "num_base_bdevs": 3, 00:09:09.948 "num_base_bdevs_discovered": 1, 00:09:09.948 "num_base_bdevs_operational": 3, 00:09:09.948 "base_bdevs_list": [ 00:09:09.948 { 00:09:09.948 "name": "BaseBdev1", 00:09:09.948 "uuid": "2b0dba12-fb8f-4268-b8cb-847f2a84b288", 00:09:09.948 "is_configured": true, 00:09:09.948 "data_offset": 0, 00:09:09.948 "data_size": 65536 00:09:09.948 }, 00:09:09.948 { 
00:09:09.948 "name": "BaseBdev2", 00:09:09.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.948 "is_configured": false, 00:09:09.948 "data_offset": 0, 00:09:09.948 "data_size": 0 00:09:09.948 }, 00:09:09.948 { 00:09:09.948 "name": "BaseBdev3", 00:09:09.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.948 "is_configured": false, 00:09:09.948 "data_offset": 0, 00:09:09.948 "data_size": 0 00:09:09.948 } 00:09:09.948 ] 00:09:09.948 }' 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.948 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.517 [2024-11-26 18:56:01.669829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.517 BaseBdev2 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.517 [ 00:09:10.517 { 00:09:10.517 "name": "BaseBdev2", 00:09:10.517 "aliases": [ 00:09:10.517 "0cdc67e2-4989-47f3-9a1b-575976548dd8" 00:09:10.517 ], 00:09:10.517 "product_name": "Malloc disk", 00:09:10.517 "block_size": 512, 00:09:10.517 "num_blocks": 65536, 00:09:10.517 "uuid": "0cdc67e2-4989-47f3-9a1b-575976548dd8", 00:09:10.517 "assigned_rate_limits": { 00:09:10.517 "rw_ios_per_sec": 0, 00:09:10.517 "rw_mbytes_per_sec": 0, 00:09:10.517 "r_mbytes_per_sec": 0, 00:09:10.517 "w_mbytes_per_sec": 0 00:09:10.517 }, 00:09:10.517 "claimed": true, 00:09:10.517 "claim_type": "exclusive_write", 00:09:10.517 "zoned": false, 00:09:10.517 "supported_io_types": { 00:09:10.517 "read": true, 00:09:10.517 "write": true, 00:09:10.517 "unmap": true, 00:09:10.517 "flush": true, 00:09:10.517 "reset": true, 00:09:10.517 "nvme_admin": false, 00:09:10.517 "nvme_io": false, 00:09:10.517 "nvme_io_md": false, 00:09:10.517 "write_zeroes": true, 00:09:10.517 "zcopy": true, 00:09:10.517 "get_zone_info": false, 00:09:10.517 "zone_management": false, 00:09:10.517 "zone_append": false, 00:09:10.517 "compare": false, 00:09:10.517 "compare_and_write": false, 00:09:10.517 "abort": true, 00:09:10.517 "seek_hole": false, 00:09:10.517 "seek_data": false, 00:09:10.517 
"copy": true, 00:09:10.517 "nvme_iov_md": false 00:09:10.517 }, 00:09:10.517 "memory_domains": [ 00:09:10.517 { 00:09:10.517 "dma_device_id": "system", 00:09:10.517 "dma_device_type": 1 00:09:10.517 }, 00:09:10.517 { 00:09:10.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.517 "dma_device_type": 2 00:09:10.517 } 00:09:10.517 ], 00:09:10.517 "driver_specific": {} 00:09:10.517 } 00:09:10.517 ] 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.517 
18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.517 "name": "Existed_Raid", 00:09:10.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.517 "strip_size_kb": 64, 00:09:10.517 "state": "configuring", 00:09:10.517 "raid_level": "concat", 00:09:10.517 "superblock": false, 00:09:10.517 "num_base_bdevs": 3, 00:09:10.517 "num_base_bdevs_discovered": 2, 00:09:10.517 "num_base_bdevs_operational": 3, 00:09:10.517 "base_bdevs_list": [ 00:09:10.517 { 00:09:10.517 "name": "BaseBdev1", 00:09:10.517 "uuid": "2b0dba12-fb8f-4268-b8cb-847f2a84b288", 00:09:10.517 "is_configured": true, 00:09:10.517 "data_offset": 0, 00:09:10.517 "data_size": 65536 00:09:10.517 }, 00:09:10.517 { 00:09:10.517 "name": "BaseBdev2", 00:09:10.517 "uuid": "0cdc67e2-4989-47f3-9a1b-575976548dd8", 00:09:10.517 "is_configured": true, 00:09:10.517 "data_offset": 0, 00:09:10.517 "data_size": 65536 00:09:10.517 }, 00:09:10.517 { 00:09:10.517 "name": "BaseBdev3", 00:09:10.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.517 "is_configured": false, 00:09:10.517 "data_offset": 0, 00:09:10.517 "data_size": 0 00:09:10.517 } 00:09:10.517 ] 00:09:10.517 }' 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.517 18:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 18:56:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 [2024-11-26 18:56:02.275216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.085 [2024-11-26 18:56:02.275549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.085 [2024-11-26 18:56:02.275610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:11.085 [2024-11-26 18:56:02.276124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:11.085 [2024-11-26 18:56:02.276480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.085 [2024-11-26 18:56:02.276524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.085 [2024-11-26 18:56:02.277017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.085 BaseBdev3 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 [ 00:09:11.085 { 00:09:11.085 "name": "BaseBdev3", 00:09:11.085 "aliases": [ 00:09:11.085 "44a6e358-2df6-478b-be1a-130b9f61072e" 00:09:11.085 ], 00:09:11.085 "product_name": "Malloc disk", 00:09:11.085 "block_size": 512, 00:09:11.085 "num_blocks": 65536, 00:09:11.085 "uuid": "44a6e358-2df6-478b-be1a-130b9f61072e", 00:09:11.085 "assigned_rate_limits": { 00:09:11.085 "rw_ios_per_sec": 0, 00:09:11.085 "rw_mbytes_per_sec": 0, 00:09:11.085 "r_mbytes_per_sec": 0, 00:09:11.085 "w_mbytes_per_sec": 0 00:09:11.085 }, 00:09:11.085 "claimed": true, 00:09:11.085 "claim_type": "exclusive_write", 00:09:11.085 "zoned": false, 00:09:11.085 "supported_io_types": { 00:09:11.085 "read": true, 00:09:11.085 "write": true, 00:09:11.085 "unmap": true, 00:09:11.085 "flush": true, 00:09:11.085 "reset": true, 00:09:11.085 "nvme_admin": false, 00:09:11.085 "nvme_io": false, 00:09:11.085 "nvme_io_md": false, 00:09:11.085 "write_zeroes": true, 00:09:11.085 "zcopy": true, 00:09:11.085 "get_zone_info": false, 00:09:11.085 "zone_management": false, 00:09:11.085 "zone_append": false, 00:09:11.085 "compare": false, 00:09:11.085 "compare_and_write": false, 
00:09:11.085 "abort": true, 00:09:11.085 "seek_hole": false, 00:09:11.085 "seek_data": false, 00:09:11.085 "copy": true, 00:09:11.085 "nvme_iov_md": false 00:09:11.085 }, 00:09:11.085 "memory_domains": [ 00:09:11.085 { 00:09:11.085 "dma_device_id": "system", 00:09:11.085 "dma_device_type": 1 00:09:11.085 }, 00:09:11.085 { 00:09:11.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.085 "dma_device_type": 2 00:09:11.085 } 00:09:11.085 ], 00:09:11.085 "driver_specific": {} 00:09:11.085 } 00:09:11.085 ] 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.085 
18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.085 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.085 "name": "Existed_Raid", 00:09:11.085 "uuid": "1594a507-ac27-4dbc-a414-386386255878", 00:09:11.085 "strip_size_kb": 64, 00:09:11.085 "state": "online", 00:09:11.085 "raid_level": "concat", 00:09:11.085 "superblock": false, 00:09:11.085 "num_base_bdevs": 3, 00:09:11.085 "num_base_bdevs_discovered": 3, 00:09:11.085 "num_base_bdevs_operational": 3, 00:09:11.085 "base_bdevs_list": [ 00:09:11.085 { 00:09:11.085 "name": "BaseBdev1", 00:09:11.085 "uuid": "2b0dba12-fb8f-4268-b8cb-847f2a84b288", 00:09:11.085 "is_configured": true, 00:09:11.085 "data_offset": 0, 00:09:11.085 "data_size": 65536 00:09:11.085 }, 00:09:11.085 { 00:09:11.085 "name": "BaseBdev2", 00:09:11.085 "uuid": "0cdc67e2-4989-47f3-9a1b-575976548dd8", 00:09:11.086 "is_configured": true, 00:09:11.086 "data_offset": 0, 00:09:11.086 "data_size": 65536 00:09:11.086 }, 00:09:11.086 { 00:09:11.086 "name": "BaseBdev3", 00:09:11.086 "uuid": "44a6e358-2df6-478b-be1a-130b9f61072e", 00:09:11.086 "is_configured": true, 00:09:11.086 "data_offset": 0, 00:09:11.086 "data_size": 65536 00:09:11.086 } 00:09:11.086 ] 00:09:11.086 }' 00:09:11.086 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.086 18:56:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.652 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.652 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.653 [2024-11-26 18:56:02.839848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.653 "name": "Existed_Raid", 00:09:11.653 "aliases": [ 00:09:11.653 "1594a507-ac27-4dbc-a414-386386255878" 00:09:11.653 ], 00:09:11.653 "product_name": "Raid Volume", 00:09:11.653 "block_size": 512, 00:09:11.653 "num_blocks": 196608, 00:09:11.653 "uuid": "1594a507-ac27-4dbc-a414-386386255878", 00:09:11.653 "assigned_rate_limits": { 00:09:11.653 "rw_ios_per_sec": 0, 00:09:11.653 "rw_mbytes_per_sec": 0, 00:09:11.653 "r_mbytes_per_sec": 0, 00:09:11.653 
"w_mbytes_per_sec": 0 00:09:11.653 }, 00:09:11.653 "claimed": false, 00:09:11.653 "zoned": false, 00:09:11.653 "supported_io_types": { 00:09:11.653 "read": true, 00:09:11.653 "write": true, 00:09:11.653 "unmap": true, 00:09:11.653 "flush": true, 00:09:11.653 "reset": true, 00:09:11.653 "nvme_admin": false, 00:09:11.653 "nvme_io": false, 00:09:11.653 "nvme_io_md": false, 00:09:11.653 "write_zeroes": true, 00:09:11.653 "zcopy": false, 00:09:11.653 "get_zone_info": false, 00:09:11.653 "zone_management": false, 00:09:11.653 "zone_append": false, 00:09:11.653 "compare": false, 00:09:11.653 "compare_and_write": false, 00:09:11.653 "abort": false, 00:09:11.653 "seek_hole": false, 00:09:11.653 "seek_data": false, 00:09:11.653 "copy": false, 00:09:11.653 "nvme_iov_md": false 00:09:11.653 }, 00:09:11.653 "memory_domains": [ 00:09:11.653 { 00:09:11.653 "dma_device_id": "system", 00:09:11.653 "dma_device_type": 1 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.653 "dma_device_type": 2 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "dma_device_id": "system", 00:09:11.653 "dma_device_type": 1 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.653 "dma_device_type": 2 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "dma_device_id": "system", 00:09:11.653 "dma_device_type": 1 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.653 "dma_device_type": 2 00:09:11.653 } 00:09:11.653 ], 00:09:11.653 "driver_specific": { 00:09:11.653 "raid": { 00:09:11.653 "uuid": "1594a507-ac27-4dbc-a414-386386255878", 00:09:11.653 "strip_size_kb": 64, 00:09:11.653 "state": "online", 00:09:11.653 "raid_level": "concat", 00:09:11.653 "superblock": false, 00:09:11.653 "num_base_bdevs": 3, 00:09:11.653 "num_base_bdevs_discovered": 3, 00:09:11.653 "num_base_bdevs_operational": 3, 00:09:11.653 "base_bdevs_list": [ 00:09:11.653 { 00:09:11.653 "name": "BaseBdev1", 00:09:11.653 "uuid": 
"2b0dba12-fb8f-4268-b8cb-847f2a84b288", 00:09:11.653 "is_configured": true, 00:09:11.653 "data_offset": 0, 00:09:11.653 "data_size": 65536 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "name": "BaseBdev2", 00:09:11.653 "uuid": "0cdc67e2-4989-47f3-9a1b-575976548dd8", 00:09:11.653 "is_configured": true, 00:09:11.653 "data_offset": 0, 00:09:11.653 "data_size": 65536 00:09:11.653 }, 00:09:11.653 { 00:09:11.653 "name": "BaseBdev3", 00:09:11.653 "uuid": "44a6e358-2df6-478b-be1a-130b9f61072e", 00:09:11.653 "is_configured": true, 00:09:11.653 "data_offset": 0, 00:09:11.653 "data_size": 65536 00:09:11.653 } 00:09:11.653 ] 00:09:11.653 } 00:09:11.653 } 00:09:11.653 }' 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.653 BaseBdev2 00:09:11.653 BaseBdev3' 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.653 18:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.654 18:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.654 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.912 
18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.912 
18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.912 [2024-11-26 18:56:03.151635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.912 [2024-11-26 18:56:03.151674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.912 [2024-11-26 18:56:03.151753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.912 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.170 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.170 "name": "Existed_Raid", 00:09:12.170 "uuid": "1594a507-ac27-4dbc-a414-386386255878", 00:09:12.170 "strip_size_kb": 64, 00:09:12.170 "state": "offline", 00:09:12.170 "raid_level": "concat", 00:09:12.170 "superblock": false, 00:09:12.170 "num_base_bdevs": 3, 00:09:12.170 "num_base_bdevs_discovered": 2, 00:09:12.170 "num_base_bdevs_operational": 2, 00:09:12.170 "base_bdevs_list": [ 00:09:12.170 { 00:09:12.170 "name": null, 00:09:12.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.170 "is_configured": false, 00:09:12.170 "data_offset": 0, 00:09:12.170 "data_size": 65536 00:09:12.170 }, 00:09:12.170 { 00:09:12.170 "name": "BaseBdev2", 00:09:12.170 "uuid": "0cdc67e2-4989-47f3-9a1b-575976548dd8", 00:09:12.170 
"is_configured": true, 00:09:12.170 "data_offset": 0, 00:09:12.170 "data_size": 65536 00:09:12.170 }, 00:09:12.170 { 00:09:12.170 "name": "BaseBdev3", 00:09:12.170 "uuid": "44a6e358-2df6-478b-be1a-130b9f61072e", 00:09:12.170 "is_configured": true, 00:09:12.170 "data_offset": 0, 00:09:12.170 "data_size": 65536 00:09:12.170 } 00:09:12.170 ] 00:09:12.170 }' 00:09:12.170 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.170 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.429 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.688 [2024-11-26 18:56:03.820506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.688 18:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.688 [2024-11-26 18:56:03.959548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.688 [2024-11-26 18:56:03.959618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.688 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.688 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.688 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.947 BaseBdev2 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.947 [ 00:09:12.947 { 00:09:12.947 "name": "BaseBdev2", 00:09:12.947 "aliases": [ 00:09:12.947 "f3bda511-00f5-4bdb-9faf-395a8ce8fdde" 00:09:12.947 ], 00:09:12.947 "product_name": "Malloc disk", 00:09:12.947 "block_size": 512, 00:09:12.947 "num_blocks": 65536, 00:09:12.947 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:12.947 "assigned_rate_limits": { 00:09:12.947 "rw_ios_per_sec": 0, 00:09:12.947 "rw_mbytes_per_sec": 0, 00:09:12.947 "r_mbytes_per_sec": 0, 00:09:12.947 "w_mbytes_per_sec": 0 00:09:12.947 }, 00:09:12.947 "claimed": false, 00:09:12.947 "zoned": false, 00:09:12.947 "supported_io_types": { 00:09:12.947 "read": true, 00:09:12.947 "write": true, 00:09:12.947 "unmap": true, 00:09:12.947 "flush": true, 00:09:12.947 "reset": true, 00:09:12.947 "nvme_admin": false, 00:09:12.947 "nvme_io": false, 00:09:12.947 "nvme_io_md": false, 00:09:12.947 "write_zeroes": true, 00:09:12.947 "zcopy": true, 00:09:12.947 "get_zone_info": false, 
00:09:12.947 "zone_management": false, 00:09:12.947 "zone_append": false, 00:09:12.947 "compare": false, 00:09:12.947 "compare_and_write": false, 00:09:12.947 "abort": true, 00:09:12.947 "seek_hole": false, 00:09:12.947 "seek_data": false, 00:09:12.947 "copy": true, 00:09:12.947 "nvme_iov_md": false 00:09:12.947 }, 00:09:12.947 "memory_domains": [ 00:09:12.947 { 00:09:12.947 "dma_device_id": "system", 00:09:12.947 "dma_device_type": 1 00:09:12.947 }, 00:09:12.947 { 00:09:12.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.947 "dma_device_type": 2 00:09:12.947 } 00:09:12.947 ], 00:09:12.947 "driver_specific": {} 00:09:12.947 } 00:09:12.947 ] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.947 BaseBdev3 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.947 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.948 [ 00:09:12.948 { 00:09:12.948 "name": "BaseBdev3", 00:09:12.948 "aliases": [ 00:09:12.948 "a568638d-c91c-46b7-b5c7-5e82a9bccca1" 00:09:12.948 ], 00:09:12.948 "product_name": "Malloc disk", 00:09:12.948 "block_size": 512, 00:09:12.948 "num_blocks": 65536, 00:09:12.948 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:12.948 "assigned_rate_limits": { 00:09:12.948 "rw_ios_per_sec": 0, 00:09:12.948 "rw_mbytes_per_sec": 0, 00:09:12.948 "r_mbytes_per_sec": 0, 00:09:12.948 "w_mbytes_per_sec": 0 00:09:12.948 }, 00:09:12.948 "claimed": false, 00:09:12.948 "zoned": false, 00:09:12.948 "supported_io_types": { 00:09:12.948 "read": true, 00:09:12.948 "write": true, 00:09:12.948 "unmap": true, 00:09:12.948 "flush": true, 00:09:12.948 "reset": true, 00:09:12.948 "nvme_admin": false, 00:09:12.948 "nvme_io": false, 00:09:12.948 "nvme_io_md": false, 00:09:12.948 "write_zeroes": true, 00:09:12.948 "zcopy": true, 00:09:12.948 "get_zone_info": false, 00:09:12.948 
"zone_management": false, 00:09:12.948 "zone_append": false, 00:09:12.948 "compare": false, 00:09:12.948 "compare_and_write": false, 00:09:12.948 "abort": true, 00:09:12.948 "seek_hole": false, 00:09:12.948 "seek_data": false, 00:09:12.948 "copy": true, 00:09:12.948 "nvme_iov_md": false 00:09:12.948 }, 00:09:12.948 "memory_domains": [ 00:09:12.948 { 00:09:12.948 "dma_device_id": "system", 00:09:12.948 "dma_device_type": 1 00:09:12.948 }, 00:09:12.948 { 00:09:12.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.948 "dma_device_type": 2 00:09:12.948 } 00:09:12.948 ], 00:09:12.948 "driver_specific": {} 00:09:12.948 } 00:09:12.948 ] 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.948 [2024-11-26 18:56:04.259448] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.948 [2024-11-26 18:56:04.259505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.948 [2024-11-26 18:56:04.259541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.948 [2024-11-26 18:56:04.262082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.948 18:56:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.948 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.207 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.207 "name": "Existed_Raid", 00:09:13.207 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:13.207 "strip_size_kb": 64, 00:09:13.207 "state": "configuring", 00:09:13.207 "raid_level": "concat", 00:09:13.207 "superblock": false, 00:09:13.207 "num_base_bdevs": 3, 00:09:13.207 "num_base_bdevs_discovered": 2, 00:09:13.207 "num_base_bdevs_operational": 3, 00:09:13.207 "base_bdevs_list": [ 00:09:13.207 { 00:09:13.207 "name": "BaseBdev1", 00:09:13.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.207 "is_configured": false, 00:09:13.207 "data_offset": 0, 00:09:13.207 "data_size": 0 00:09:13.207 }, 00:09:13.207 { 00:09:13.207 "name": "BaseBdev2", 00:09:13.207 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:13.207 "is_configured": true, 00:09:13.207 "data_offset": 0, 00:09:13.207 "data_size": 65536 00:09:13.207 }, 00:09:13.207 { 00:09:13.207 "name": "BaseBdev3", 00:09:13.207 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:13.207 "is_configured": true, 00:09:13.207 "data_offset": 0, 00:09:13.207 "data_size": 65536 00:09:13.207 } 00:09:13.207 ] 00:09:13.207 }' 00:09:13.207 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.207 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.466 [2024-11-26 18:56:04.787629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.466 18:56:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.466 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.725 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.725 "name": "Existed_Raid", 00:09:13.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.725 "strip_size_kb": 64, 00:09:13.725 "state": "configuring", 00:09:13.725 "raid_level": "concat", 00:09:13.725 "superblock": false, 00:09:13.725 "num_base_bdevs": 3, 00:09:13.725 "num_base_bdevs_discovered": 1, 00:09:13.725 
"num_base_bdevs_operational": 3, 00:09:13.725 "base_bdevs_list": [ 00:09:13.725 { 00:09:13.725 "name": "BaseBdev1", 00:09:13.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.725 "is_configured": false, 00:09:13.725 "data_offset": 0, 00:09:13.725 "data_size": 0 00:09:13.725 }, 00:09:13.725 { 00:09:13.725 "name": null, 00:09:13.725 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:13.725 "is_configured": false, 00:09:13.725 "data_offset": 0, 00:09:13.725 "data_size": 65536 00:09:13.725 }, 00:09:13.725 { 00:09:13.725 "name": "BaseBdev3", 00:09:13.725 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:13.725 "is_configured": true, 00:09:13.725 "data_offset": 0, 00:09:13.725 "data_size": 65536 00:09:13.725 } 00:09:13.725 ] 00:09:13.725 }' 00:09:13.725 18:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.725 18:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.983 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.983 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.983 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.983 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.983 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:14.268 [2024-11-26 18:56:05.410647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.268 BaseBdev1 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.268 [ 00:09:14.268 { 00:09:14.268 "name": "BaseBdev1", 00:09:14.268 "aliases": [ 00:09:14.268 "0166e75a-460d-42ec-a8e9-e833d98c029b" 00:09:14.268 ], 00:09:14.268 "product_name": "Malloc disk", 00:09:14.268 "block_size": 512, 00:09:14.268 "num_blocks": 65536, 00:09:14.268 
"uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:14.268 "assigned_rate_limits": { 00:09:14.268 "rw_ios_per_sec": 0, 00:09:14.268 "rw_mbytes_per_sec": 0, 00:09:14.268 "r_mbytes_per_sec": 0, 00:09:14.268 "w_mbytes_per_sec": 0 00:09:14.268 }, 00:09:14.268 "claimed": true, 00:09:14.268 "claim_type": "exclusive_write", 00:09:14.268 "zoned": false, 00:09:14.268 "supported_io_types": { 00:09:14.268 "read": true, 00:09:14.268 "write": true, 00:09:14.268 "unmap": true, 00:09:14.268 "flush": true, 00:09:14.268 "reset": true, 00:09:14.268 "nvme_admin": false, 00:09:14.268 "nvme_io": false, 00:09:14.268 "nvme_io_md": false, 00:09:14.268 "write_zeroes": true, 00:09:14.268 "zcopy": true, 00:09:14.268 "get_zone_info": false, 00:09:14.268 "zone_management": false, 00:09:14.268 "zone_append": false, 00:09:14.268 "compare": false, 00:09:14.268 "compare_and_write": false, 00:09:14.268 "abort": true, 00:09:14.268 "seek_hole": false, 00:09:14.268 "seek_data": false, 00:09:14.268 "copy": true, 00:09:14.268 "nvme_iov_md": false 00:09:14.268 }, 00:09:14.268 "memory_domains": [ 00:09:14.268 { 00:09:14.268 "dma_device_id": "system", 00:09:14.268 "dma_device_type": 1 00:09:14.268 }, 00:09:14.268 { 00:09:14.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.268 "dma_device_type": 2 00:09:14.268 } 00:09:14.268 ], 00:09:14.268 "driver_specific": {} 00:09:14.268 } 00:09:14.268 ] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.268 
18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.268 "name": "Existed_Raid", 00:09:14.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.268 "strip_size_kb": 64, 00:09:14.268 "state": "configuring", 00:09:14.268 "raid_level": "concat", 00:09:14.268 "superblock": false, 00:09:14.268 "num_base_bdevs": 3, 00:09:14.268 "num_base_bdevs_discovered": 2, 00:09:14.268 "num_base_bdevs_operational": 3, 00:09:14.268 "base_bdevs_list": [ 00:09:14.268 { 00:09:14.268 "name": "BaseBdev1", 00:09:14.268 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:14.268 "is_configured": true, 00:09:14.268 
"data_offset": 0, 00:09:14.268 "data_size": 65536 00:09:14.268 }, 00:09:14.268 { 00:09:14.268 "name": null, 00:09:14.268 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:14.268 "is_configured": false, 00:09:14.268 "data_offset": 0, 00:09:14.268 "data_size": 65536 00:09:14.268 }, 00:09:14.268 { 00:09:14.268 "name": "BaseBdev3", 00:09:14.268 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:14.268 "is_configured": true, 00:09:14.268 "data_offset": 0, 00:09:14.268 "data_size": 65536 00:09:14.268 } 00:09:14.268 ] 00:09:14.268 }' 00:09:14.268 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.269 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.839 18:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.839 [2024-11-26 18:56:05.998878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.839 
18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.839 "name": "Existed_Raid", 00:09:14.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.839 "strip_size_kb": 64, 00:09:14.839 "state": "configuring", 
00:09:14.839 "raid_level": "concat", 00:09:14.839 "superblock": false, 00:09:14.839 "num_base_bdevs": 3, 00:09:14.839 "num_base_bdevs_discovered": 1, 00:09:14.839 "num_base_bdevs_operational": 3, 00:09:14.839 "base_bdevs_list": [ 00:09:14.839 { 00:09:14.839 "name": "BaseBdev1", 00:09:14.839 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:14.839 "is_configured": true, 00:09:14.839 "data_offset": 0, 00:09:14.839 "data_size": 65536 00:09:14.839 }, 00:09:14.839 { 00:09:14.839 "name": null, 00:09:14.839 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:14.839 "is_configured": false, 00:09:14.839 "data_offset": 0, 00:09:14.839 "data_size": 65536 00:09:14.839 }, 00:09:14.839 { 00:09:14.839 "name": null, 00:09:14.839 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:14.839 "is_configured": false, 00:09:14.839 "data_offset": 0, 00:09:14.839 "data_size": 65536 00:09:14.839 } 00:09:14.839 ] 00:09:14.839 }' 00:09:14.839 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.840 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:15.408 18:56:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.408 [2024-11-26 18:56:06.555109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.408 18:56:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.408 "name": "Existed_Raid", 00:09:15.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.408 "strip_size_kb": 64, 00:09:15.408 "state": "configuring", 00:09:15.408 "raid_level": "concat", 00:09:15.408 "superblock": false, 00:09:15.408 "num_base_bdevs": 3, 00:09:15.408 "num_base_bdevs_discovered": 2, 00:09:15.408 "num_base_bdevs_operational": 3, 00:09:15.408 "base_bdevs_list": [ 00:09:15.408 { 00:09:15.408 "name": "BaseBdev1", 00:09:15.408 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:15.408 "is_configured": true, 00:09:15.408 "data_offset": 0, 00:09:15.408 "data_size": 65536 00:09:15.408 }, 00:09:15.408 { 00:09:15.408 "name": null, 00:09:15.408 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:15.408 "is_configured": false, 00:09:15.408 "data_offset": 0, 00:09:15.408 "data_size": 65536 00:09:15.408 }, 00:09:15.408 { 00:09:15.408 "name": "BaseBdev3", 00:09:15.408 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:15.408 "is_configured": true, 00:09:15.408 "data_offset": 0, 00:09:15.408 "data_size": 65536 00:09:15.408 } 00:09:15.408 ] 00:09:15.408 }' 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.408 18:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.976 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.976 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.977 18:56:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.977 [2024-11-26 18:56:07.135245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.977 "name": "Existed_Raid", 00:09:15.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.977 "strip_size_kb": 64, 00:09:15.977 "state": "configuring", 00:09:15.977 "raid_level": "concat", 00:09:15.977 "superblock": false, 00:09:15.977 "num_base_bdevs": 3, 00:09:15.977 "num_base_bdevs_discovered": 1, 00:09:15.977 "num_base_bdevs_operational": 3, 00:09:15.977 "base_bdevs_list": [ 00:09:15.977 { 00:09:15.977 "name": null, 00:09:15.977 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:15.977 "is_configured": false, 00:09:15.977 "data_offset": 0, 00:09:15.977 "data_size": 65536 00:09:15.977 }, 00:09:15.977 { 00:09:15.977 "name": null, 00:09:15.977 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:15.977 "is_configured": false, 00:09:15.977 "data_offset": 0, 00:09:15.977 "data_size": 65536 00:09:15.977 }, 00:09:15.977 { 00:09:15.977 "name": "BaseBdev3", 00:09:15.977 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:15.977 "is_configured": true, 00:09:15.977 "data_offset": 0, 00:09:15.977 "data_size": 65536 00:09:15.977 } 00:09:15.977 ] 00:09:15.977 }' 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.977 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.545 
18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.545 [2024-11-26 18:56:07.777498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.545 
18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.545 "name": "Existed_Raid", 00:09:16.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.545 "strip_size_kb": 64, 00:09:16.545 "state": "configuring", 00:09:16.545 "raid_level": "concat", 00:09:16.545 "superblock": false, 00:09:16.545 "num_base_bdevs": 3, 00:09:16.545 "num_base_bdevs_discovered": 2, 00:09:16.545 "num_base_bdevs_operational": 3, 00:09:16.545 "base_bdevs_list": [ 00:09:16.545 { 00:09:16.545 "name": null, 00:09:16.545 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:16.545 "is_configured": false, 00:09:16.545 "data_offset": 0, 00:09:16.545 "data_size": 65536 00:09:16.545 }, 00:09:16.545 { 00:09:16.545 "name": "BaseBdev2", 00:09:16.545 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:16.545 "is_configured": true, 00:09:16.545 "data_offset": 0, 00:09:16.545 "data_size": 65536 00:09:16.545 }, 00:09:16.545 { 00:09:16.545 "name": "BaseBdev3", 00:09:16.545 
"uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:16.545 "is_configured": true, 00:09:16.545 "data_offset": 0, 00:09:16.545 "data_size": 65536 00:09:16.545 } 00:09:16.545 ] 00:09:16.545 }' 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.545 18:56:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0166e75a-460d-42ec-a8e9-e833d98c029b 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.113 [2024-11-26 18:56:08.455331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:17.113 [2024-11-26 18:56:08.455385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:17.113 [2024-11-26 18:56:08.455402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:17.113 [2024-11-26 18:56:08.455724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:17.113 [2024-11-26 18:56:08.455952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:17.113 [2024-11-26 18:56:08.455970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:17.113 [2024-11-26 18:56:08.456274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.113 NewBaseBdev 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.113 
18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.113 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 [ 00:09:17.372 { 00:09:17.372 "name": "NewBaseBdev", 00:09:17.372 "aliases": [ 00:09:17.372 "0166e75a-460d-42ec-a8e9-e833d98c029b" 00:09:17.372 ], 00:09:17.372 "product_name": "Malloc disk", 00:09:17.372 "block_size": 512, 00:09:17.372 "num_blocks": 65536, 00:09:17.372 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:17.372 "assigned_rate_limits": { 00:09:17.372 "rw_ios_per_sec": 0, 00:09:17.372 "rw_mbytes_per_sec": 0, 00:09:17.372 "r_mbytes_per_sec": 0, 00:09:17.372 "w_mbytes_per_sec": 0 00:09:17.372 }, 00:09:17.372 "claimed": true, 00:09:17.372 "claim_type": "exclusive_write", 00:09:17.372 "zoned": false, 00:09:17.372 "supported_io_types": { 00:09:17.372 "read": true, 00:09:17.372 "write": true, 00:09:17.372 "unmap": true, 00:09:17.372 "flush": true, 00:09:17.372 "reset": true, 00:09:17.372 "nvme_admin": false, 00:09:17.372 "nvme_io": false, 00:09:17.372 "nvme_io_md": false, 00:09:17.372 "write_zeroes": true, 00:09:17.372 "zcopy": true, 00:09:17.372 "get_zone_info": false, 00:09:17.372 "zone_management": false, 00:09:17.372 "zone_append": false, 00:09:17.372 "compare": false, 00:09:17.372 "compare_and_write": false, 00:09:17.372 "abort": true, 00:09:17.372 "seek_hole": false, 00:09:17.372 "seek_data": false, 00:09:17.372 "copy": true, 00:09:17.372 "nvme_iov_md": false 00:09:17.372 }, 00:09:17.372 "memory_domains": [ 00:09:17.372 { 00:09:17.372 "dma_device_id": "system", 00:09:17.372 "dma_device_type": 1 
00:09:17.372 }, 00:09:17.372 { 00:09:17.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.372 "dma_device_type": 2 00:09:17.372 } 00:09:17.372 ], 00:09:17.372 "driver_specific": {} 00:09:17.372 } 00:09:17.372 ] 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.372 "name": "Existed_Raid", 00:09:17.372 "uuid": "2c0876ec-a70b-4d46-9042-20c285276597", 00:09:17.372 "strip_size_kb": 64, 00:09:17.372 "state": "online", 00:09:17.372 "raid_level": "concat", 00:09:17.372 "superblock": false, 00:09:17.372 "num_base_bdevs": 3, 00:09:17.372 "num_base_bdevs_discovered": 3, 00:09:17.372 "num_base_bdevs_operational": 3, 00:09:17.372 "base_bdevs_list": [ 00:09:17.372 { 00:09:17.372 "name": "NewBaseBdev", 00:09:17.372 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:17.372 "is_configured": true, 00:09:17.372 "data_offset": 0, 00:09:17.372 "data_size": 65536 00:09:17.372 }, 00:09:17.372 { 00:09:17.372 "name": "BaseBdev2", 00:09:17.372 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:17.372 "is_configured": true, 00:09:17.372 "data_offset": 0, 00:09:17.372 "data_size": 65536 00:09:17.372 }, 00:09:17.372 { 00:09:17.372 "name": "BaseBdev3", 00:09:17.372 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:17.372 "is_configured": true, 00:09:17.372 "data_offset": 0, 00:09:17.372 "data_size": 65536 00:09:17.372 } 00:09:17.372 ] 00:09:17.372 }' 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.372 18:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.943 18:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.943 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.943 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.943 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.943 [2024-11-26 18:56:09.007942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.943 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.943 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.943 "name": "Existed_Raid", 00:09:17.943 "aliases": [ 00:09:17.943 "2c0876ec-a70b-4d46-9042-20c285276597" 00:09:17.943 ], 00:09:17.943 "product_name": "Raid Volume", 00:09:17.943 "block_size": 512, 00:09:17.943 "num_blocks": 196608, 00:09:17.943 "uuid": "2c0876ec-a70b-4d46-9042-20c285276597", 00:09:17.943 "assigned_rate_limits": { 00:09:17.943 "rw_ios_per_sec": 0, 00:09:17.943 "rw_mbytes_per_sec": 0, 00:09:17.943 "r_mbytes_per_sec": 0, 00:09:17.943 "w_mbytes_per_sec": 0 00:09:17.943 }, 00:09:17.943 "claimed": false, 00:09:17.943 "zoned": false, 00:09:17.943 "supported_io_types": { 00:09:17.943 "read": true, 00:09:17.943 "write": true, 00:09:17.943 "unmap": true, 00:09:17.943 "flush": true, 00:09:17.943 "reset": true, 00:09:17.943 "nvme_admin": false, 00:09:17.943 "nvme_io": false, 00:09:17.943 "nvme_io_md": false, 00:09:17.943 "write_zeroes": true, 00:09:17.943 "zcopy": false, 00:09:17.943 "get_zone_info": false, 00:09:17.943 "zone_management": false, 00:09:17.943 
"zone_append": false, 00:09:17.943 "compare": false, 00:09:17.943 "compare_and_write": false, 00:09:17.943 "abort": false, 00:09:17.943 "seek_hole": false, 00:09:17.943 "seek_data": false, 00:09:17.943 "copy": false, 00:09:17.943 "nvme_iov_md": false 00:09:17.943 }, 00:09:17.943 "memory_domains": [ 00:09:17.943 { 00:09:17.943 "dma_device_id": "system", 00:09:17.943 "dma_device_type": 1 00:09:17.943 }, 00:09:17.943 { 00:09:17.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.943 "dma_device_type": 2 00:09:17.943 }, 00:09:17.943 { 00:09:17.943 "dma_device_id": "system", 00:09:17.943 "dma_device_type": 1 00:09:17.943 }, 00:09:17.943 { 00:09:17.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.943 "dma_device_type": 2 00:09:17.943 }, 00:09:17.943 { 00:09:17.943 "dma_device_id": "system", 00:09:17.943 "dma_device_type": 1 00:09:17.943 }, 00:09:17.943 { 00:09:17.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.943 "dma_device_type": 2 00:09:17.943 } 00:09:17.943 ], 00:09:17.943 "driver_specific": { 00:09:17.943 "raid": { 00:09:17.943 "uuid": "2c0876ec-a70b-4d46-9042-20c285276597", 00:09:17.943 "strip_size_kb": 64, 00:09:17.943 "state": "online", 00:09:17.943 "raid_level": "concat", 00:09:17.943 "superblock": false, 00:09:17.943 "num_base_bdevs": 3, 00:09:17.943 "num_base_bdevs_discovered": 3, 00:09:17.943 "num_base_bdevs_operational": 3, 00:09:17.943 "base_bdevs_list": [ 00:09:17.943 { 00:09:17.943 "name": "NewBaseBdev", 00:09:17.943 "uuid": "0166e75a-460d-42ec-a8e9-e833d98c029b", 00:09:17.943 "is_configured": true, 00:09:17.943 "data_offset": 0, 00:09:17.943 "data_size": 65536 00:09:17.944 }, 00:09:17.944 { 00:09:17.944 "name": "BaseBdev2", 00:09:17.944 "uuid": "f3bda511-00f5-4bdb-9faf-395a8ce8fdde", 00:09:17.944 "is_configured": true, 00:09:17.944 "data_offset": 0, 00:09:17.944 "data_size": 65536 00:09:17.944 }, 00:09:17.944 { 00:09:17.944 "name": "BaseBdev3", 00:09:17.944 "uuid": "a568638d-c91c-46b7-b5c7-5e82a9bccca1", 00:09:17.944 "is_configured": 
true, 00:09:17.944 "data_offset": 0, 00:09:17.944 "data_size": 65536 00:09:17.944 } 00:09:17.944 ] 00:09:17.944 } 00:09:17.944 } 00:09:17.944 }' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:17.944 BaseBdev2 00:09:17.944 BaseBdev3' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.944 18:56:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.944 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.203 [2024-11-26 18:56:09.315623] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:18.203 [2024-11-26 18:56:09.315660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.203 [2024-11-26 18:56:09.315767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.203 [2024-11-26 18:56:09.315844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.203 [2024-11-26 18:56:09.315865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65649 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65649 ']' 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65649 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65649 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65649' 00:09:18.203 killing process with pid 65649 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65649 00:09:18.203 [2024-11-26 18:56:09.353758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:18.203 18:56:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65649 00:09:18.460 [2024-11-26 18:56:09.630448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.392 18:56:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:19.392 00:09:19.392 real 0m11.915s 00:09:19.392 user 0m19.752s 00:09:19.392 sys 0m1.605s 00:09:19.393 18:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.393 18:56:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.393 ************************************ 00:09:19.393 END TEST raid_state_function_test 00:09:19.393 ************************************ 00:09:19.393 18:56:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:19.393 18:56:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.393 18:56:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.393 18:56:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.652 ************************************ 00:09:19.652 START TEST raid_state_function_test_sb 00:09:19.652 ************************************ 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:19.652 Process raid pid: 66287 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66287 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66287' 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66287 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66287 ']' 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.652 18:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.652 [2024-11-26 18:56:10.916766] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:09:19.652 [2024-11-26 18:56:10.917200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.911 [2024-11-26 18:56:11.106049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.911 [2024-11-26 18:56:11.241281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.170 [2024-11-26 18:56:11.454036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.170 [2024-11-26 18:56:11.454277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.760 [2024-11-26 18:56:11.962729] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.760 [2024-11-26 18:56:11.962797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.760 [2024-11-26 18:56:11.962815] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.760 [2024-11-26 18:56:11.962830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.760 [2024-11-26 18:56:11.962840] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:20.760 [2024-11-26 18:56:11.962854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.760 18:56:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.760 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.760 "name": "Existed_Raid", 00:09:20.760 "uuid": "be234e9a-1e37-4f4d-8975-d4477e80f2f9", 00:09:20.760 "strip_size_kb": 64, 00:09:20.760 "state": "configuring", 00:09:20.760 "raid_level": "concat", 00:09:20.760 "superblock": true, 00:09:20.760 "num_base_bdevs": 3, 00:09:20.760 "num_base_bdevs_discovered": 0, 00:09:20.761 "num_base_bdevs_operational": 3, 00:09:20.761 "base_bdevs_list": [ 00:09:20.761 { 00:09:20.761 "name": "BaseBdev1", 00:09:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.761 "is_configured": false, 00:09:20.761 "data_offset": 0, 00:09:20.761 "data_size": 0 00:09:20.761 }, 00:09:20.761 { 00:09:20.761 "name": "BaseBdev2", 00:09:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.761 "is_configured": false, 00:09:20.761 "data_offset": 0, 00:09:20.761 "data_size": 0 00:09:20.761 }, 00:09:20.761 { 00:09:20.761 "name": "BaseBdev3", 00:09:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.761 "is_configured": false, 00:09:20.761 "data_offset": 0, 00:09:20.761 "data_size": 0 00:09:20.761 } 00:09:20.761 ] 00:09:20.761 }' 00:09:20.761 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.761 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 [2024-11-26 18:56:12.482865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.350 [2024-11-26 18:56:12.483110] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 [2024-11-26 18:56:12.490821] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.350 [2024-11-26 18:56:12.491044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.350 [2024-11-26 18:56:12.491174] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.350 [2024-11-26 18:56:12.491253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.350 [2024-11-26 18:56:12.491409] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.350 [2024-11-26 18:56:12.491469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 [2024-11-26 18:56:12.540131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.350 BaseBdev1 
00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 [ 00:09:21.350 { 00:09:21.350 "name": "BaseBdev1", 00:09:21.350 "aliases": [ 00:09:21.350 "883b9796-d1f2-454d-8c10-9c79b8af1d6b" 00:09:21.350 ], 00:09:21.350 "product_name": "Malloc disk", 00:09:21.350 "block_size": 512, 00:09:21.350 "num_blocks": 65536, 00:09:21.350 "uuid": "883b9796-d1f2-454d-8c10-9c79b8af1d6b", 00:09:21.350 "assigned_rate_limits": { 00:09:21.350 
"rw_ios_per_sec": 0, 00:09:21.350 "rw_mbytes_per_sec": 0, 00:09:21.350 "r_mbytes_per_sec": 0, 00:09:21.350 "w_mbytes_per_sec": 0 00:09:21.350 }, 00:09:21.350 "claimed": true, 00:09:21.350 "claim_type": "exclusive_write", 00:09:21.350 "zoned": false, 00:09:21.350 "supported_io_types": { 00:09:21.350 "read": true, 00:09:21.350 "write": true, 00:09:21.350 "unmap": true, 00:09:21.350 "flush": true, 00:09:21.350 "reset": true, 00:09:21.350 "nvme_admin": false, 00:09:21.350 "nvme_io": false, 00:09:21.350 "nvme_io_md": false, 00:09:21.350 "write_zeroes": true, 00:09:21.350 "zcopy": true, 00:09:21.350 "get_zone_info": false, 00:09:21.350 "zone_management": false, 00:09:21.350 "zone_append": false, 00:09:21.350 "compare": false, 00:09:21.350 "compare_and_write": false, 00:09:21.350 "abort": true, 00:09:21.350 "seek_hole": false, 00:09:21.350 "seek_data": false, 00:09:21.350 "copy": true, 00:09:21.350 "nvme_iov_md": false 00:09:21.350 }, 00:09:21.350 "memory_domains": [ 00:09:21.350 { 00:09:21.350 "dma_device_id": "system", 00:09:21.350 "dma_device_type": 1 00:09:21.350 }, 00:09:21.350 { 00:09:21.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.350 "dma_device_type": 2 00:09:21.350 } 00:09:21.350 ], 00:09:21.350 "driver_specific": {} 00:09:21.350 } 00:09:21.350 ] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.350 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.350 "name": "Existed_Raid", 00:09:21.350 "uuid": "86e1e578-ada4-4d75-81ab-1f8087cc43a1", 00:09:21.350 "strip_size_kb": 64, 00:09:21.350 "state": "configuring", 00:09:21.350 "raid_level": "concat", 00:09:21.350 "superblock": true, 00:09:21.350 "num_base_bdevs": 3, 00:09:21.350 "num_base_bdevs_discovered": 1, 00:09:21.350 "num_base_bdevs_operational": 3, 00:09:21.350 "base_bdevs_list": [ 00:09:21.350 { 00:09:21.350 "name": "BaseBdev1", 00:09:21.350 "uuid": "883b9796-d1f2-454d-8c10-9c79b8af1d6b", 00:09:21.351 "is_configured": true, 00:09:21.351 "data_offset": 2048, 00:09:21.351 "data_size": 
63488 00:09:21.351 }, 00:09:21.351 { 00:09:21.351 "name": "BaseBdev2", 00:09:21.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.351 "is_configured": false, 00:09:21.351 "data_offset": 0, 00:09:21.351 "data_size": 0 00:09:21.351 }, 00:09:21.351 { 00:09:21.351 "name": "BaseBdev3", 00:09:21.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.351 "is_configured": false, 00:09:21.351 "data_offset": 0, 00:09:21.351 "data_size": 0 00:09:21.351 } 00:09:21.351 ] 00:09:21.351 }' 00:09:21.351 18:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.351 18:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.940 [2024-11-26 18:56:13.092359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.940 [2024-11-26 18:56:13.092425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.940 [2024-11-26 18:56:13.104420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.940 [2024-11-26 
18:56:13.107078] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.940 [2024-11-26 18:56:13.107283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.940 [2024-11-26 18:56:13.107424] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.940 [2024-11-26 18:56:13.107557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.940 "name": "Existed_Raid", 00:09:21.940 "uuid": "aee1c627-660d-4323-87c6-318309d4103f", 00:09:21.940 "strip_size_kb": 64, 00:09:21.940 "state": "configuring", 00:09:21.940 "raid_level": "concat", 00:09:21.940 "superblock": true, 00:09:21.940 "num_base_bdevs": 3, 00:09:21.940 "num_base_bdevs_discovered": 1, 00:09:21.940 "num_base_bdevs_operational": 3, 00:09:21.940 "base_bdevs_list": [ 00:09:21.940 { 00:09:21.940 "name": "BaseBdev1", 00:09:21.940 "uuid": "883b9796-d1f2-454d-8c10-9c79b8af1d6b", 00:09:21.940 "is_configured": true, 00:09:21.940 "data_offset": 2048, 00:09:21.940 "data_size": 63488 00:09:21.940 }, 00:09:21.940 { 00:09:21.940 "name": "BaseBdev2", 00:09:21.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.940 "is_configured": false, 00:09:21.940 "data_offset": 0, 00:09:21.940 "data_size": 0 00:09:21.940 }, 00:09:21.940 { 00:09:21.940 "name": "BaseBdev3", 00:09:21.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.940 "is_configured": false, 00:09:21.940 "data_offset": 0, 00:09:21.940 "data_size": 0 00:09:21.940 } 00:09:21.940 ] 00:09:21.940 }' 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.940 18:56:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.508 [2024-11-26 18:56:13.636251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.508 BaseBdev2 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.508 [ 00:09:22.508 { 00:09:22.508 "name": "BaseBdev2", 00:09:22.508 "aliases": [ 00:09:22.508 "14fc58f2-8d2b-485c-b8d9-76574d8426c5" 00:09:22.508 ], 00:09:22.508 "product_name": "Malloc disk", 00:09:22.508 "block_size": 512, 00:09:22.508 "num_blocks": 65536, 00:09:22.508 "uuid": "14fc58f2-8d2b-485c-b8d9-76574d8426c5", 00:09:22.508 "assigned_rate_limits": { 00:09:22.508 "rw_ios_per_sec": 0, 00:09:22.508 "rw_mbytes_per_sec": 0, 00:09:22.508 "r_mbytes_per_sec": 0, 00:09:22.508 "w_mbytes_per_sec": 0 00:09:22.508 }, 00:09:22.508 "claimed": true, 00:09:22.508 "claim_type": "exclusive_write", 00:09:22.508 "zoned": false, 00:09:22.508 "supported_io_types": { 00:09:22.508 "read": true, 00:09:22.508 "write": true, 00:09:22.508 "unmap": true, 00:09:22.508 "flush": true, 00:09:22.508 "reset": true, 00:09:22.508 "nvme_admin": false, 00:09:22.508 "nvme_io": false, 00:09:22.508 "nvme_io_md": false, 00:09:22.508 "write_zeroes": true, 00:09:22.508 "zcopy": true, 00:09:22.508 "get_zone_info": false, 00:09:22.508 "zone_management": false, 00:09:22.508 "zone_append": false, 00:09:22.508 "compare": false, 00:09:22.508 "compare_and_write": false, 00:09:22.508 "abort": true, 00:09:22.508 "seek_hole": false, 00:09:22.508 "seek_data": false, 00:09:22.508 "copy": true, 00:09:22.508 "nvme_iov_md": false 00:09:22.508 }, 00:09:22.508 "memory_domains": [ 00:09:22.508 { 00:09:22.508 "dma_device_id": "system", 00:09:22.508 "dma_device_type": 1 00:09:22.508 }, 00:09:22.508 { 00:09:22.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.508 "dma_device_type": 2 00:09:22.508 } 00:09:22.508 ], 00:09:22.508 "driver_specific": {} 00:09:22.508 } 00:09:22.508 ] 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.508 "name": "Existed_Raid", 00:09:22.508 "uuid": "aee1c627-660d-4323-87c6-318309d4103f", 00:09:22.508 "strip_size_kb": 64, 00:09:22.508 "state": "configuring", 00:09:22.508 "raid_level": "concat", 00:09:22.508 "superblock": true, 00:09:22.508 "num_base_bdevs": 3, 00:09:22.508 "num_base_bdevs_discovered": 2, 00:09:22.508 "num_base_bdevs_operational": 3, 00:09:22.508 "base_bdevs_list": [ 00:09:22.508 { 00:09:22.508 "name": "BaseBdev1", 00:09:22.508 "uuid": "883b9796-d1f2-454d-8c10-9c79b8af1d6b", 00:09:22.508 "is_configured": true, 00:09:22.508 "data_offset": 2048, 00:09:22.508 "data_size": 63488 00:09:22.508 }, 00:09:22.508 { 00:09:22.508 "name": "BaseBdev2", 00:09:22.508 "uuid": "14fc58f2-8d2b-485c-b8d9-76574d8426c5", 00:09:22.508 "is_configured": true, 00:09:22.508 "data_offset": 2048, 00:09:22.508 "data_size": 63488 00:09:22.508 }, 00:09:22.508 { 00:09:22.508 "name": "BaseBdev3", 00:09:22.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.508 "is_configured": false, 00:09:22.508 "data_offset": 0, 00:09:22.508 "data_size": 0 00:09:22.508 } 00:09:22.508 ] 00:09:22.508 }' 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.508 18:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.076 [2024-11-26 18:56:14.183755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.076 [2024-11-26 18:56:14.184099] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.076 [2024-11-26 18:56:14.184128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:23.076 BaseBdev3 00:09:23.076 [2024-11-26 18:56:14.184606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:23.076 [2024-11-26 18:56:14.184988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.076 [2024-11-26 18:56:14.185126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.076 [2024-11-26 18:56:14.185565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.076 [ 00:09:23.076 { 00:09:23.076 "name": "BaseBdev3", 00:09:23.076 "aliases": [ 00:09:23.076 "b5493ea9-81fe-4207-973a-c326d65524cc" 00:09:23.076 ], 00:09:23.076 "product_name": "Malloc disk", 00:09:23.076 "block_size": 512, 00:09:23.076 "num_blocks": 65536, 00:09:23.076 "uuid": "b5493ea9-81fe-4207-973a-c326d65524cc", 00:09:23.076 "assigned_rate_limits": { 00:09:23.076 "rw_ios_per_sec": 0, 00:09:23.076 "rw_mbytes_per_sec": 0, 00:09:23.076 "r_mbytes_per_sec": 0, 00:09:23.076 "w_mbytes_per_sec": 0 00:09:23.076 }, 00:09:23.076 "claimed": true, 00:09:23.076 "claim_type": "exclusive_write", 00:09:23.076 "zoned": false, 00:09:23.076 "supported_io_types": { 00:09:23.076 "read": true, 00:09:23.076 "write": true, 00:09:23.076 "unmap": true, 00:09:23.076 "flush": true, 00:09:23.076 "reset": true, 00:09:23.076 "nvme_admin": false, 00:09:23.076 "nvme_io": false, 00:09:23.076 "nvme_io_md": false, 00:09:23.076 "write_zeroes": true, 00:09:23.076 "zcopy": true, 00:09:23.076 "get_zone_info": false, 00:09:23.076 "zone_management": false, 00:09:23.076 "zone_append": false, 00:09:23.076 "compare": false, 00:09:23.076 "compare_and_write": false, 00:09:23.076 "abort": true, 00:09:23.076 "seek_hole": false, 00:09:23.076 "seek_data": false, 00:09:23.076 "copy": true, 00:09:23.076 "nvme_iov_md": false 00:09:23.076 }, 00:09:23.076 "memory_domains": [ 00:09:23.076 { 00:09:23.076 "dma_device_id": "system", 00:09:23.076 "dma_device_type": 1 00:09:23.076 }, 00:09:23.076 { 00:09:23.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.076 "dma_device_type": 2 00:09:23.076 } 00:09:23.076 ], 00:09:23.076 "driver_specific": 
{} 00:09:23.076 } 00:09:23.076 ] 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:23.076 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.077 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.077 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.077 "name": "Existed_Raid", 00:09:23.077 "uuid": "aee1c627-660d-4323-87c6-318309d4103f", 00:09:23.077 "strip_size_kb": 64, 00:09:23.077 "state": "online", 00:09:23.077 "raid_level": "concat", 00:09:23.077 "superblock": true, 00:09:23.077 "num_base_bdevs": 3, 00:09:23.077 "num_base_bdevs_discovered": 3, 00:09:23.077 "num_base_bdevs_operational": 3, 00:09:23.077 "base_bdevs_list": [ 00:09:23.077 { 00:09:23.077 "name": "BaseBdev1", 00:09:23.077 "uuid": "883b9796-d1f2-454d-8c10-9c79b8af1d6b", 00:09:23.077 "is_configured": true, 00:09:23.077 "data_offset": 2048, 00:09:23.077 "data_size": 63488 00:09:23.077 }, 00:09:23.077 { 00:09:23.077 "name": "BaseBdev2", 00:09:23.077 "uuid": "14fc58f2-8d2b-485c-b8d9-76574d8426c5", 00:09:23.077 "is_configured": true, 00:09:23.077 "data_offset": 2048, 00:09:23.077 "data_size": 63488 00:09:23.077 }, 00:09:23.077 { 00:09:23.077 "name": "BaseBdev3", 00:09:23.077 "uuid": "b5493ea9-81fe-4207-973a-c326d65524cc", 00:09:23.077 "is_configured": true, 00:09:23.077 "data_offset": 2048, 00:09:23.077 "data_size": 63488 00:09:23.077 } 00:09:23.077 ] 00:09:23.077 }' 00:09:23.077 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.077 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.644 [2024-11-26 18:56:14.756410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.644 "name": "Existed_Raid", 00:09:23.644 "aliases": [ 00:09:23.644 "aee1c627-660d-4323-87c6-318309d4103f" 00:09:23.644 ], 00:09:23.644 "product_name": "Raid Volume", 00:09:23.644 "block_size": 512, 00:09:23.644 "num_blocks": 190464, 00:09:23.644 "uuid": "aee1c627-660d-4323-87c6-318309d4103f", 00:09:23.644 "assigned_rate_limits": { 00:09:23.644 "rw_ios_per_sec": 0, 00:09:23.644 "rw_mbytes_per_sec": 0, 00:09:23.644 "r_mbytes_per_sec": 0, 00:09:23.644 "w_mbytes_per_sec": 0 00:09:23.644 }, 00:09:23.644 "claimed": false, 00:09:23.644 "zoned": false, 00:09:23.644 "supported_io_types": { 00:09:23.644 "read": true, 00:09:23.644 "write": true, 00:09:23.644 "unmap": true, 00:09:23.644 "flush": true, 00:09:23.644 "reset": true, 00:09:23.644 "nvme_admin": false, 00:09:23.644 "nvme_io": false, 00:09:23.644 "nvme_io_md": false, 00:09:23.644 
"write_zeroes": true, 00:09:23.644 "zcopy": false, 00:09:23.644 "get_zone_info": false, 00:09:23.644 "zone_management": false, 00:09:23.644 "zone_append": false, 00:09:23.644 "compare": false, 00:09:23.644 "compare_and_write": false, 00:09:23.644 "abort": false, 00:09:23.644 "seek_hole": false, 00:09:23.644 "seek_data": false, 00:09:23.644 "copy": false, 00:09:23.644 "nvme_iov_md": false 00:09:23.644 }, 00:09:23.644 "memory_domains": [ 00:09:23.644 { 00:09:23.644 "dma_device_id": "system", 00:09:23.644 "dma_device_type": 1 00:09:23.644 }, 00:09:23.644 { 00:09:23.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.644 "dma_device_type": 2 00:09:23.644 }, 00:09:23.644 { 00:09:23.644 "dma_device_id": "system", 00:09:23.644 "dma_device_type": 1 00:09:23.644 }, 00:09:23.644 { 00:09:23.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.644 "dma_device_type": 2 00:09:23.644 }, 00:09:23.644 { 00:09:23.644 "dma_device_id": "system", 00:09:23.644 "dma_device_type": 1 00:09:23.644 }, 00:09:23.644 { 00:09:23.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.644 "dma_device_type": 2 00:09:23.644 } 00:09:23.644 ], 00:09:23.644 "driver_specific": { 00:09:23.644 "raid": { 00:09:23.644 "uuid": "aee1c627-660d-4323-87c6-318309d4103f", 00:09:23.644 "strip_size_kb": 64, 00:09:23.644 "state": "online", 00:09:23.644 "raid_level": "concat", 00:09:23.644 "superblock": true, 00:09:23.644 "num_base_bdevs": 3, 00:09:23.644 "num_base_bdevs_discovered": 3, 00:09:23.644 "num_base_bdevs_operational": 3, 00:09:23.644 "base_bdevs_list": [ 00:09:23.644 { 00:09:23.644 "name": "BaseBdev1", 00:09:23.644 "uuid": "883b9796-d1f2-454d-8c10-9c79b8af1d6b", 00:09:23.644 "is_configured": true, 00:09:23.644 "data_offset": 2048, 00:09:23.644 "data_size": 63488 00:09:23.644 }, 00:09:23.644 { 00:09:23.644 "name": "BaseBdev2", 00:09:23.644 "uuid": "14fc58f2-8d2b-485c-b8d9-76574d8426c5", 00:09:23.644 "is_configured": true, 00:09:23.644 "data_offset": 2048, 00:09:23.644 "data_size": 63488 00:09:23.644 }, 
00:09:23.644 { 00:09:23.644 "name": "BaseBdev3", 00:09:23.644 "uuid": "b5493ea9-81fe-4207-973a-c326d65524cc", 00:09:23.644 "is_configured": true, 00:09:23.644 "data_offset": 2048, 00:09:23.644 "data_size": 63488 00:09:23.644 } 00:09:23.644 ] 00:09:23.644 } 00:09:23.644 } 00:09:23.644 }' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:23.644 BaseBdev2 00:09:23.644 BaseBdev3' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.644 
18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.644 18:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.903 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.903 [2024-11-26 18:56:15.068156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.903 [2024-11-26 18:56:15.068191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.904 [2024-11-26 18:56:15.068281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.904 "name": "Existed_Raid", 00:09:23.904 "uuid": "aee1c627-660d-4323-87c6-318309d4103f", 00:09:23.904 "strip_size_kb": 64, 00:09:23.904 "state": "offline", 00:09:23.904 "raid_level": "concat", 00:09:23.904 "superblock": true, 00:09:23.904 "num_base_bdevs": 3, 00:09:23.904 "num_base_bdevs_discovered": 2, 00:09:23.904 "num_base_bdevs_operational": 2, 00:09:23.904 "base_bdevs_list": [ 00:09:23.904 { 00:09:23.904 "name": null, 00:09:23.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.904 "is_configured": false, 00:09:23.904 "data_offset": 0, 00:09:23.904 "data_size": 63488 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "name": "BaseBdev2", 00:09:23.904 "uuid": "14fc58f2-8d2b-485c-b8d9-76574d8426c5", 00:09:23.904 "is_configured": true, 00:09:23.904 "data_offset": 2048, 00:09:23.904 "data_size": 63488 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "name": "BaseBdev3", 00:09:23.904 "uuid": "b5493ea9-81fe-4207-973a-c326d65524cc", 
00:09:23.904 "is_configured": true, 00:09:23.904 "data_offset": 2048, 00:09:23.904 "data_size": 63488 00:09:23.904 } 00:09:23.904 ] 00:09:23.904 }' 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.904 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.472 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.472 [2024-11-26 18:56:15.756061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 [2024-11-26 18:56:15.920884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.809 [2024-11-26 18:56:15.920974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 BaseBdev2 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.809 18:56:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 [ 00:09:24.809 { 00:09:24.809 "name": "BaseBdev2", 00:09:24.809 "aliases": [ 00:09:24.809 "07a3d620-1e51-4eb8-847d-9196ddddeeee" 00:09:24.809 ], 00:09:24.809 "product_name": "Malloc disk", 00:09:24.809 "block_size": 512, 00:09:24.809 "num_blocks": 65536, 00:09:24.809 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:24.809 "assigned_rate_limits": { 00:09:24.809 "rw_ios_per_sec": 0, 00:09:24.809 "rw_mbytes_per_sec": 0, 00:09:24.809 "r_mbytes_per_sec": 0, 00:09:24.809 "w_mbytes_per_sec": 0 00:09:24.809 }, 00:09:24.809 "claimed": false, 00:09:24.809 "zoned": false, 00:09:24.809 "supported_io_types": { 00:09:24.809 "read": true, 00:09:24.809 "write": true, 00:09:24.809 "unmap": true, 00:09:24.809 "flush": true, 00:09:24.809 "reset": true, 00:09:24.809 "nvme_admin": false, 00:09:24.809 "nvme_io": false, 00:09:24.809 "nvme_io_md": false, 00:09:24.809 "write_zeroes": true, 00:09:24.809 "zcopy": true, 00:09:24.809 "get_zone_info": false, 00:09:24.809 
"zone_management": false, 00:09:24.809 "zone_append": false, 00:09:24.809 "compare": false, 00:09:24.809 "compare_and_write": false, 00:09:24.809 "abort": true, 00:09:24.809 "seek_hole": false, 00:09:24.809 "seek_data": false, 00:09:24.809 "copy": true, 00:09:24.809 "nvme_iov_md": false 00:09:24.809 }, 00:09:24.809 "memory_domains": [ 00:09:24.809 { 00:09:24.809 "dma_device_id": "system", 00:09:24.809 "dma_device_type": 1 00:09:24.809 }, 00:09:24.809 { 00:09:24.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.809 "dma_device_type": 2 00:09:24.809 } 00:09:24.809 ], 00:09:24.809 "driver_specific": {} 00:09:24.809 } 00:09:24.809 ] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.068 BaseBdev3 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.068 [ 00:09:25.068 { 00:09:25.068 "name": "BaseBdev3", 00:09:25.068 "aliases": [ 00:09:25.068 "5f86be55-cd70-4eca-a040-8405a8a5d010" 00:09:25.068 ], 00:09:25.068 "product_name": "Malloc disk", 00:09:25.068 "block_size": 512, 00:09:25.068 "num_blocks": 65536, 00:09:25.068 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:25.068 "assigned_rate_limits": { 00:09:25.068 "rw_ios_per_sec": 0, 00:09:25.068 "rw_mbytes_per_sec": 0, 00:09:25.068 "r_mbytes_per_sec": 0, 00:09:25.068 "w_mbytes_per_sec": 0 00:09:25.068 }, 00:09:25.068 "claimed": false, 00:09:25.068 "zoned": false, 00:09:25.068 "supported_io_types": { 00:09:25.068 "read": true, 00:09:25.068 "write": true, 00:09:25.068 "unmap": true, 00:09:25.068 "flush": true, 00:09:25.068 "reset": true, 00:09:25.068 "nvme_admin": false, 00:09:25.068 "nvme_io": false, 00:09:25.068 "nvme_io_md": false, 00:09:25.068 "write_zeroes": true, 00:09:25.068 
"zcopy": true, 00:09:25.068 "get_zone_info": false, 00:09:25.068 "zone_management": false, 00:09:25.068 "zone_append": false, 00:09:25.068 "compare": false, 00:09:25.068 "compare_and_write": false, 00:09:25.068 "abort": true, 00:09:25.068 "seek_hole": false, 00:09:25.068 "seek_data": false, 00:09:25.068 "copy": true, 00:09:25.068 "nvme_iov_md": false 00:09:25.068 }, 00:09:25.068 "memory_domains": [ 00:09:25.068 { 00:09:25.068 "dma_device_id": "system", 00:09:25.068 "dma_device_type": 1 00:09:25.068 }, 00:09:25.068 { 00:09:25.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.068 "dma_device_type": 2 00:09:25.068 } 00:09:25.068 ], 00:09:25.068 "driver_specific": {} 00:09:25.068 } 00:09:25.068 ] 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.068 [2024-11-26 18:56:16.238612] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.068 [2024-11-26 18:56:16.238819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.068 [2024-11-26 18:56:16.239016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.068 [2024-11-26 18:56:16.241664] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.068 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.069 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.069 18:56:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.069 "name": "Existed_Raid", 00:09:25.069 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:25.069 "strip_size_kb": 64, 00:09:25.069 "state": "configuring", 00:09:25.069 "raid_level": "concat", 00:09:25.069 "superblock": true, 00:09:25.069 "num_base_bdevs": 3, 00:09:25.069 "num_base_bdevs_discovered": 2, 00:09:25.069 "num_base_bdevs_operational": 3, 00:09:25.069 "base_bdevs_list": [ 00:09:25.069 { 00:09:25.069 "name": "BaseBdev1", 00:09:25.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.069 "is_configured": false, 00:09:25.069 "data_offset": 0, 00:09:25.069 "data_size": 0 00:09:25.069 }, 00:09:25.069 { 00:09:25.069 "name": "BaseBdev2", 00:09:25.069 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:25.069 "is_configured": true, 00:09:25.069 "data_offset": 2048, 00:09:25.069 "data_size": 63488 00:09:25.069 }, 00:09:25.069 { 00:09:25.069 "name": "BaseBdev3", 00:09:25.069 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:25.069 "is_configured": true, 00:09:25.069 "data_offset": 2048, 00:09:25.069 "data_size": 63488 00:09:25.069 } 00:09:25.069 ] 00:09:25.069 }' 00:09:25.069 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.069 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 [2024-11-26 18:56:16.762846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.637 18:56:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.637 "name": "Existed_Raid", 00:09:25.637 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:25.637 "strip_size_kb": 64, 
00:09:25.637 "state": "configuring", 00:09:25.637 "raid_level": "concat", 00:09:25.637 "superblock": true, 00:09:25.637 "num_base_bdevs": 3, 00:09:25.637 "num_base_bdevs_discovered": 1, 00:09:25.637 "num_base_bdevs_operational": 3, 00:09:25.637 "base_bdevs_list": [ 00:09:25.637 { 00:09:25.637 "name": "BaseBdev1", 00:09:25.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.637 "is_configured": false, 00:09:25.637 "data_offset": 0, 00:09:25.637 "data_size": 0 00:09:25.637 }, 00:09:25.637 { 00:09:25.637 "name": null, 00:09:25.637 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:25.637 "is_configured": false, 00:09:25.637 "data_offset": 0, 00:09:25.637 "data_size": 63488 00:09:25.637 }, 00:09:25.637 { 00:09:25.637 "name": "BaseBdev3", 00:09:25.637 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:25.637 "is_configured": true, 00:09:25.637 "data_offset": 2048, 00:09:25.637 "data_size": 63488 00:09:25.637 } 00:09:25.637 ] 00:09:25.637 }' 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.637 18:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.203 [2024-11-26 18:56:17.418320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.203 BaseBdev1 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.203 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.203 
[ 00:09:26.203 { 00:09:26.203 "name": "BaseBdev1", 00:09:26.203 "aliases": [ 00:09:26.203 "f08bb126-2114-42f5-99f4-42de29e9dff5" 00:09:26.203 ], 00:09:26.203 "product_name": "Malloc disk", 00:09:26.203 "block_size": 512, 00:09:26.203 "num_blocks": 65536, 00:09:26.203 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:26.203 "assigned_rate_limits": { 00:09:26.204 "rw_ios_per_sec": 0, 00:09:26.204 "rw_mbytes_per_sec": 0, 00:09:26.204 "r_mbytes_per_sec": 0, 00:09:26.204 "w_mbytes_per_sec": 0 00:09:26.204 }, 00:09:26.204 "claimed": true, 00:09:26.204 "claim_type": "exclusive_write", 00:09:26.204 "zoned": false, 00:09:26.204 "supported_io_types": { 00:09:26.204 "read": true, 00:09:26.204 "write": true, 00:09:26.204 "unmap": true, 00:09:26.204 "flush": true, 00:09:26.204 "reset": true, 00:09:26.204 "nvme_admin": false, 00:09:26.204 "nvme_io": false, 00:09:26.204 "nvme_io_md": false, 00:09:26.204 "write_zeroes": true, 00:09:26.204 "zcopy": true, 00:09:26.204 "get_zone_info": false, 00:09:26.204 "zone_management": false, 00:09:26.204 "zone_append": false, 00:09:26.204 "compare": false, 00:09:26.204 "compare_and_write": false, 00:09:26.204 "abort": true, 00:09:26.204 "seek_hole": false, 00:09:26.204 "seek_data": false, 00:09:26.204 "copy": true, 00:09:26.204 "nvme_iov_md": false 00:09:26.204 }, 00:09:26.204 "memory_domains": [ 00:09:26.204 { 00:09:26.204 "dma_device_id": "system", 00:09:26.204 "dma_device_type": 1 00:09:26.204 }, 00:09:26.204 { 00:09:26.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.204 "dma_device_type": 2 00:09:26.204 } 00:09:26.204 ], 00:09:26.204 "driver_specific": {} 00:09:26.204 } 00:09:26.204 ] 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.204 "name": "Existed_Raid", 00:09:26.204 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:26.204 "strip_size_kb": 64, 00:09:26.204 "state": "configuring", 00:09:26.204 "raid_level": "concat", 00:09:26.204 "superblock": true, 
00:09:26.204 "num_base_bdevs": 3, 00:09:26.204 "num_base_bdevs_discovered": 2, 00:09:26.204 "num_base_bdevs_operational": 3, 00:09:26.204 "base_bdevs_list": [ 00:09:26.204 { 00:09:26.204 "name": "BaseBdev1", 00:09:26.204 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:26.204 "is_configured": true, 00:09:26.204 "data_offset": 2048, 00:09:26.204 "data_size": 63488 00:09:26.204 }, 00:09:26.204 { 00:09:26.204 "name": null, 00:09:26.204 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:26.204 "is_configured": false, 00:09:26.204 "data_offset": 0, 00:09:26.204 "data_size": 63488 00:09:26.204 }, 00:09:26.204 { 00:09:26.204 "name": "BaseBdev3", 00:09:26.204 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:26.204 "is_configured": true, 00:09:26.204 "data_offset": 2048, 00:09:26.204 "data_size": 63488 00:09:26.204 } 00:09:26.204 ] 00:09:26.204 }' 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.204 18:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:26.770 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.771 [2024-11-26 18:56:18.058601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.771 "name": "Existed_Raid", 00:09:26.771 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:26.771 "strip_size_kb": 64, 00:09:26.771 "state": "configuring", 00:09:26.771 "raid_level": "concat", 00:09:26.771 "superblock": true, 00:09:26.771 "num_base_bdevs": 3, 00:09:26.771 "num_base_bdevs_discovered": 1, 00:09:26.771 "num_base_bdevs_operational": 3, 00:09:26.771 "base_bdevs_list": [ 00:09:26.771 { 00:09:26.771 "name": "BaseBdev1", 00:09:26.771 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:26.771 "is_configured": true, 00:09:26.771 "data_offset": 2048, 00:09:26.771 "data_size": 63488 00:09:26.771 }, 00:09:26.771 { 00:09:26.771 "name": null, 00:09:26.771 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:26.771 "is_configured": false, 00:09:26.771 "data_offset": 0, 00:09:26.771 "data_size": 63488 00:09:26.771 }, 00:09:26.771 { 00:09:26.771 "name": null, 00:09:26.771 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:26.771 "is_configured": false, 00:09:26.771 "data_offset": 0, 00:09:26.771 "data_size": 63488 00:09:26.771 } 00:09:26.771 ] 00:09:26.771 }' 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.771 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.340 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.340 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.340 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.340 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:27.340 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.598 [2024-11-26 18:56:18.710901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.598 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.598 18:56:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.599 "name": "Existed_Raid", 00:09:27.599 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:27.599 "strip_size_kb": 64, 00:09:27.599 "state": "configuring", 00:09:27.599 "raid_level": "concat", 00:09:27.599 "superblock": true, 00:09:27.599 "num_base_bdevs": 3, 00:09:27.599 "num_base_bdevs_discovered": 2, 00:09:27.599 "num_base_bdevs_operational": 3, 00:09:27.599 "base_bdevs_list": [ 00:09:27.599 { 00:09:27.599 "name": "BaseBdev1", 00:09:27.599 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:27.599 "is_configured": true, 00:09:27.599 "data_offset": 2048, 00:09:27.599 "data_size": 63488 00:09:27.599 }, 00:09:27.599 { 00:09:27.599 "name": null, 00:09:27.599 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:27.599 "is_configured": false, 00:09:27.599 "data_offset": 0, 00:09:27.599 "data_size": 63488 00:09:27.599 }, 00:09:27.599 { 00:09:27.599 "name": "BaseBdev3", 00:09:27.599 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:27.599 "is_configured": true, 00:09:27.599 "data_offset": 2048, 00:09:27.599 "data_size": 63488 00:09:27.599 } 00:09:27.599 ] 00:09:27.599 }' 00:09:27.599 18:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.599 
18:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.168 [2024-11-26 18:56:19.323119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.168 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.169 18:56:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.169 "name": "Existed_Raid", 00:09:28.169 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:28.169 "strip_size_kb": 64, 00:09:28.169 "state": "configuring", 00:09:28.169 "raid_level": "concat", 00:09:28.169 "superblock": true, 00:09:28.169 "num_base_bdevs": 3, 00:09:28.169 "num_base_bdevs_discovered": 1, 00:09:28.169 "num_base_bdevs_operational": 3, 00:09:28.169 "base_bdevs_list": [ 00:09:28.169 { 00:09:28.169 "name": null, 00:09:28.169 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:28.169 "is_configured": false, 00:09:28.169 "data_offset": 0, 00:09:28.169 "data_size": 63488 00:09:28.169 }, 00:09:28.169 { 00:09:28.169 "name": null, 00:09:28.169 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:28.169 "is_configured": false, 
00:09:28.169 "data_offset": 0, 00:09:28.169 "data_size": 63488 00:09:28.169 }, 00:09:28.169 { 00:09:28.169 "name": "BaseBdev3", 00:09:28.169 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:28.169 "is_configured": true, 00:09:28.169 "data_offset": 2048, 00:09:28.169 "data_size": 63488 00:09:28.169 } 00:09:28.169 ] 00:09:28.169 }' 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.169 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.746 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.746 18:56:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.746 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.746 18:56:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.746 [2024-11-26 18:56:20.031898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.746 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.746 "name": "Existed_Raid", 00:09:28.746 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:28.746 "strip_size_kb": 64, 00:09:28.746 "state": "configuring", 00:09:28.746 "raid_level": "concat", 00:09:28.746 "superblock": true, 00:09:28.746 
"num_base_bdevs": 3, 00:09:28.746 "num_base_bdevs_discovered": 2, 00:09:28.746 "num_base_bdevs_operational": 3, 00:09:28.746 "base_bdevs_list": [ 00:09:28.746 { 00:09:28.746 "name": null, 00:09:28.746 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:28.746 "is_configured": false, 00:09:28.746 "data_offset": 0, 00:09:28.746 "data_size": 63488 00:09:28.746 }, 00:09:28.746 { 00:09:28.746 "name": "BaseBdev2", 00:09:28.747 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:28.747 "is_configured": true, 00:09:28.747 "data_offset": 2048, 00:09:28.747 "data_size": 63488 00:09:28.747 }, 00:09:28.747 { 00:09:28.747 "name": "BaseBdev3", 00:09:28.747 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:28.747 "is_configured": true, 00:09:28.747 "data_offset": 2048, 00:09:28.747 "data_size": 63488 00:09:28.747 } 00:09:28.747 ] 00:09:28.747 }' 00:09:28.747 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.747 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.314 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.574 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f08bb126-2114-42f5-99f4-42de29e9dff5 00:09:29.574 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.574 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.574 [2024-11-26 18:56:20.722890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.575 [2024-11-26 18:56:20.723245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.575 [2024-11-26 18:56:20.723271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.575 NewBaseBdev 00:09:29.575 [2024-11-26 18:56:20.723580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.575 [2024-11-26 18:56:20.723789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.575 [2024-11-26 18:56:20.723806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:29.575 [2024-11-26 18:56:20.723999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.575 [ 00:09:29.575 { 00:09:29.575 "name": "NewBaseBdev", 00:09:29.575 "aliases": [ 00:09:29.575 "f08bb126-2114-42f5-99f4-42de29e9dff5" 00:09:29.575 ], 00:09:29.575 "product_name": "Malloc disk", 00:09:29.575 "block_size": 512, 00:09:29.575 "num_blocks": 65536, 00:09:29.575 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:29.575 "assigned_rate_limits": { 00:09:29.575 "rw_ios_per_sec": 0, 00:09:29.575 "rw_mbytes_per_sec": 0, 00:09:29.575 "r_mbytes_per_sec": 0, 00:09:29.575 "w_mbytes_per_sec": 0 00:09:29.575 }, 00:09:29.575 "claimed": true, 00:09:29.575 "claim_type": "exclusive_write", 00:09:29.575 "zoned": false, 00:09:29.575 "supported_io_types": { 
00:09:29.575 "read": true, 00:09:29.575 "write": true, 00:09:29.575 "unmap": true, 00:09:29.575 "flush": true, 00:09:29.575 "reset": true, 00:09:29.575 "nvme_admin": false, 00:09:29.575 "nvme_io": false, 00:09:29.575 "nvme_io_md": false, 00:09:29.575 "write_zeroes": true, 00:09:29.575 "zcopy": true, 00:09:29.575 "get_zone_info": false, 00:09:29.575 "zone_management": false, 00:09:29.575 "zone_append": false, 00:09:29.575 "compare": false, 00:09:29.575 "compare_and_write": false, 00:09:29.575 "abort": true, 00:09:29.575 "seek_hole": false, 00:09:29.575 "seek_data": false, 00:09:29.575 "copy": true, 00:09:29.575 "nvme_iov_md": false 00:09:29.575 }, 00:09:29.575 "memory_domains": [ 00:09:29.575 { 00:09:29.575 "dma_device_id": "system", 00:09:29.575 "dma_device_type": 1 00:09:29.575 }, 00:09:29.575 { 00:09:29.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.575 "dma_device_type": 2 00:09:29.575 } 00:09:29.575 ], 00:09:29.575 "driver_specific": {} 00:09:29.575 } 00:09:29.575 ] 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.575 18:56:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.575 "name": "Existed_Raid", 00:09:29.575 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:29.575 "strip_size_kb": 64, 00:09:29.575 "state": "online", 00:09:29.575 "raid_level": "concat", 00:09:29.575 "superblock": true, 00:09:29.575 "num_base_bdevs": 3, 00:09:29.575 "num_base_bdevs_discovered": 3, 00:09:29.575 "num_base_bdevs_operational": 3, 00:09:29.575 "base_bdevs_list": [ 00:09:29.575 { 00:09:29.575 "name": "NewBaseBdev", 00:09:29.575 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:29.575 "is_configured": true, 00:09:29.575 "data_offset": 2048, 00:09:29.575 "data_size": 63488 00:09:29.575 }, 00:09:29.575 { 00:09:29.575 "name": "BaseBdev2", 00:09:29.575 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:29.575 "is_configured": true, 00:09:29.575 "data_offset": 2048, 00:09:29.575 "data_size": 63488 00:09:29.575 }, 00:09:29.575 { 00:09:29.575 
"name": "BaseBdev3", 00:09:29.575 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:29.575 "is_configured": true, 00:09:29.575 "data_offset": 2048, 00:09:29.575 "data_size": 63488 00:09:29.575 } 00:09:29.575 ] 00:09:29.575 }' 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.575 18:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.144 [2024-11-26 18:56:21.303856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.144 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.144 "name": "Existed_Raid", 00:09:30.144 "aliases": [ 
00:09:30.144 "bf91f76a-7e64-4f96-9212-11eb11e6e6d5" 00:09:30.144 ], 00:09:30.144 "product_name": "Raid Volume", 00:09:30.144 "block_size": 512, 00:09:30.144 "num_blocks": 190464, 00:09:30.144 "uuid": "bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:30.144 "assigned_rate_limits": { 00:09:30.144 "rw_ios_per_sec": 0, 00:09:30.144 "rw_mbytes_per_sec": 0, 00:09:30.144 "r_mbytes_per_sec": 0, 00:09:30.144 "w_mbytes_per_sec": 0 00:09:30.144 }, 00:09:30.145 "claimed": false, 00:09:30.145 "zoned": false, 00:09:30.145 "supported_io_types": { 00:09:30.145 "read": true, 00:09:30.145 "write": true, 00:09:30.145 "unmap": true, 00:09:30.145 "flush": true, 00:09:30.145 "reset": true, 00:09:30.145 "nvme_admin": false, 00:09:30.145 "nvme_io": false, 00:09:30.145 "nvme_io_md": false, 00:09:30.145 "write_zeroes": true, 00:09:30.145 "zcopy": false, 00:09:30.145 "get_zone_info": false, 00:09:30.145 "zone_management": false, 00:09:30.145 "zone_append": false, 00:09:30.145 "compare": false, 00:09:30.145 "compare_and_write": false, 00:09:30.145 "abort": false, 00:09:30.145 "seek_hole": false, 00:09:30.145 "seek_data": false, 00:09:30.145 "copy": false, 00:09:30.145 "nvme_iov_md": false 00:09:30.145 }, 00:09:30.145 "memory_domains": [ 00:09:30.145 { 00:09:30.145 "dma_device_id": "system", 00:09:30.145 "dma_device_type": 1 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.145 "dma_device_type": 2 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "dma_device_id": "system", 00:09:30.145 "dma_device_type": 1 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.145 "dma_device_type": 2 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "dma_device_id": "system", 00:09:30.145 "dma_device_type": 1 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.145 "dma_device_type": 2 00:09:30.145 } 00:09:30.145 ], 00:09:30.145 "driver_specific": { 00:09:30.145 "raid": { 00:09:30.145 "uuid": 
"bf91f76a-7e64-4f96-9212-11eb11e6e6d5", 00:09:30.145 "strip_size_kb": 64, 00:09:30.145 "state": "online", 00:09:30.145 "raid_level": "concat", 00:09:30.145 "superblock": true, 00:09:30.145 "num_base_bdevs": 3, 00:09:30.145 "num_base_bdevs_discovered": 3, 00:09:30.145 "num_base_bdevs_operational": 3, 00:09:30.145 "base_bdevs_list": [ 00:09:30.145 { 00:09:30.145 "name": "NewBaseBdev", 00:09:30.145 "uuid": "f08bb126-2114-42f5-99f4-42de29e9dff5", 00:09:30.145 "is_configured": true, 00:09:30.145 "data_offset": 2048, 00:09:30.145 "data_size": 63488 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "name": "BaseBdev2", 00:09:30.145 "uuid": "07a3d620-1e51-4eb8-847d-9196ddddeeee", 00:09:30.145 "is_configured": true, 00:09:30.145 "data_offset": 2048, 00:09:30.145 "data_size": 63488 00:09:30.145 }, 00:09:30.145 { 00:09:30.145 "name": "BaseBdev3", 00:09:30.145 "uuid": "5f86be55-cd70-4eca-a040-8405a8a5d010", 00:09:30.145 "is_configured": true, 00:09:30.145 "data_offset": 2048, 00:09:30.145 "data_size": 63488 00:09:30.145 } 00:09:30.145 ] 00:09:30.145 } 00:09:30.145 } 00:09:30.145 }' 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:30.145 BaseBdev2 00:09:30.145 BaseBdev3' 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.145 
18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.145 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.404 [2024-11-26 18:56:21.651344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.404 [2024-11-26 18:56:21.651380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.404 [2024-11-26 18:56:21.651498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.404 [2024-11-26 18:56:21.651576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.404 [2024-11-26 18:56:21.651597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66287 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66287 ']' 00:09:30.404 18:56:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66287 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66287 00:09:30.404 killing process with pid 66287 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66287' 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66287 00:09:30.404 18:56:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66287 00:09:30.404 [2024-11-26 18:56:21.690827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.662 [2024-11-26 18:56:21.968856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.046 18:56:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.046 00:09:32.046 real 0m12.251s 00:09:32.046 user 0m20.356s 00:09:32.046 sys 0m1.667s 00:09:32.046 ************************************ 00:09:32.046 END TEST raid_state_function_test_sb 00:09:32.046 ************************************ 00:09:32.046 18:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.046 18:56:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.046 18:56:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:32.046 
18:56:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.046 18:56:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.046 18:56:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.046 ************************************ 00:09:32.046 START TEST raid_superblock_test 00:09:32.046 ************************************ 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:32.046 
18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66925 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66925 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66925 ']' 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.046 18:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.046 [2024-11-26 18:56:23.184281] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:09:32.046 [2024-11-26 18:56:23.184507] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66925 ] 00:09:32.046 [2024-11-26 18:56:23.367985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.304 [2024-11-26 18:56:23.502258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.562 [2024-11-26 18:56:23.707629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.562 [2024-11-26 18:56:23.707866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:32.820 
18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.820 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 malloc1 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 [2024-11-26 18:56:24.197147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.079 [2024-11-26 18:56:24.197221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.079 [2024-11-26 18:56:24.197267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:33.079 [2024-11-26 18:56:24.197284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.079 [2024-11-26 18:56:24.200239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.079 [2024-11-26 18:56:24.200285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.079 pt1 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 malloc2 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 [2024-11-26 18:56:24.253705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.079 [2024-11-26 18:56:24.253775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.079 [2024-11-26 18:56:24.253816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:33.079 [2024-11-26 18:56:24.253833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.079 [2024-11-26 18:56:24.256638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.079 [2024-11-26 18:56:24.256682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.079 
pt2 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 malloc3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 [2024-11-26 18:56:24.319577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.079 [2024-11-26 18:56:24.319642] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.079 [2024-11-26 18:56:24.319678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:33.079 [2024-11-26 18:56:24.319694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.079 [2024-11-26 18:56:24.322590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.079 [2024-11-26 18:56:24.322633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.079 pt3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.079 [2024-11-26 18:56:24.331656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.079 [2024-11-26 18:56:24.334357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.079 [2024-11-26 18:56:24.334598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.079 [2024-11-26 18:56:24.334969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:33.079 [2024-11-26 18:56:24.334999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:33.079 [2024-11-26 18:56:24.335363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:33.079 [2024-11-26 18:56:24.335573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:33.079 [2024-11-26 18:56:24.335589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:33.079 [2024-11-26 18:56:24.335845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.079 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.080 18:56:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.080 "name": "raid_bdev1", 00:09:33.080 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77", 00:09:33.080 "strip_size_kb": 64, 00:09:33.080 "state": "online", 00:09:33.080 "raid_level": "concat", 00:09:33.080 "superblock": true, 00:09:33.080 "num_base_bdevs": 3, 00:09:33.080 "num_base_bdevs_discovered": 3, 00:09:33.080 "num_base_bdevs_operational": 3, 00:09:33.080 "base_bdevs_list": [ 00:09:33.080 { 00:09:33.080 "name": "pt1", 00:09:33.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.080 "is_configured": true, 00:09:33.080 "data_offset": 2048, 00:09:33.080 "data_size": 63488 00:09:33.080 }, 00:09:33.080 { 00:09:33.080 "name": "pt2", 00:09:33.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.080 "is_configured": true, 00:09:33.080 "data_offset": 2048, 00:09:33.080 "data_size": 63488 00:09:33.080 }, 00:09:33.080 { 00:09:33.080 "name": "pt3", 00:09:33.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.080 "is_configured": true, 00:09:33.080 "data_offset": 2048, 00:09:33.080 "data_size": 63488 00:09:33.080 } 00:09:33.080 ] 00:09:33.080 }' 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.080 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.647 [2024-11-26 18:56:24.876402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.647 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.647 "name": "raid_bdev1", 00:09:33.647 "aliases": [ 00:09:33.647 "240bcffc-e53d-4b96-9fcc-cb2b38e79f77" 00:09:33.647 ], 00:09:33.647 "product_name": "Raid Volume", 00:09:33.647 "block_size": 512, 00:09:33.647 "num_blocks": 190464, 00:09:33.647 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77", 00:09:33.647 "assigned_rate_limits": { 00:09:33.647 "rw_ios_per_sec": 0, 00:09:33.647 "rw_mbytes_per_sec": 0, 00:09:33.647 "r_mbytes_per_sec": 0, 00:09:33.647 "w_mbytes_per_sec": 0 00:09:33.647 }, 00:09:33.647 "claimed": false, 00:09:33.647 "zoned": false, 00:09:33.647 "supported_io_types": { 00:09:33.647 "read": true, 00:09:33.647 "write": true, 00:09:33.647 "unmap": true, 00:09:33.647 "flush": true, 00:09:33.647 "reset": true, 00:09:33.647 "nvme_admin": false, 00:09:33.647 "nvme_io": false, 00:09:33.647 "nvme_io_md": false, 00:09:33.647 "write_zeroes": true, 00:09:33.647 "zcopy": false, 00:09:33.647 "get_zone_info": false, 00:09:33.647 "zone_management": false, 00:09:33.647 "zone_append": false, 00:09:33.647 "compare": 
false, 00:09:33.647 "compare_and_write": false, 00:09:33.647 "abort": false, 00:09:33.647 "seek_hole": false, 00:09:33.647 "seek_data": false, 00:09:33.647 "copy": false, 00:09:33.647 "nvme_iov_md": false 00:09:33.647 }, 00:09:33.647 "memory_domains": [ 00:09:33.647 { 00:09:33.647 "dma_device_id": "system", 00:09:33.647 "dma_device_type": 1 00:09:33.647 }, 00:09:33.647 { 00:09:33.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.647 "dma_device_type": 2 00:09:33.647 }, 00:09:33.647 { 00:09:33.647 "dma_device_id": "system", 00:09:33.647 "dma_device_type": 1 00:09:33.648 }, 00:09:33.648 { 00:09:33.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.648 "dma_device_type": 2 00:09:33.648 }, 00:09:33.648 { 00:09:33.648 "dma_device_id": "system", 00:09:33.648 "dma_device_type": 1 00:09:33.648 }, 00:09:33.648 { 00:09:33.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.648 "dma_device_type": 2 00:09:33.648 } 00:09:33.648 ], 00:09:33.648 "driver_specific": { 00:09:33.648 "raid": { 00:09:33.648 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77", 00:09:33.648 "strip_size_kb": 64, 00:09:33.648 "state": "online", 00:09:33.648 "raid_level": "concat", 00:09:33.648 "superblock": true, 00:09:33.648 "num_base_bdevs": 3, 00:09:33.648 "num_base_bdevs_discovered": 3, 00:09:33.648 "num_base_bdevs_operational": 3, 00:09:33.648 "base_bdevs_list": [ 00:09:33.648 { 00:09:33.648 "name": "pt1", 00:09:33.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.648 "is_configured": true, 00:09:33.648 "data_offset": 2048, 00:09:33.648 "data_size": 63488 00:09:33.648 }, 00:09:33.648 { 00:09:33.648 "name": "pt2", 00:09:33.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.648 "is_configured": true, 00:09:33.648 "data_offset": 2048, 00:09:33.648 "data_size": 63488 00:09:33.648 }, 00:09:33.648 { 00:09:33.648 "name": "pt3", 00:09:33.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.648 "is_configured": true, 00:09:33.648 "data_offset": 2048, 00:09:33.648 
"data_size": 63488 00:09:33.648 } 00:09:33.648 ] 00:09:33.648 } 00:09:33.648 } 00:09:33.648 }' 00:09:33.648 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.648 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:33.648 pt2 00:09:33.648 pt3' 00:09:33.648 18:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.906 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.907 [2024-11-26 18:56:25.180455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=240bcffc-e53d-4b96-9fcc-cb2b38e79f77 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 240bcffc-e53d-4b96-9fcc-cb2b38e79f77 ']' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.907 [2024-11-26 18:56:25.224114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.907 [2024-11-26 18:56:25.224269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.907 [2024-11-26 18:56:25.224398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.907 [2024-11-26 18:56:25.224484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.907 [2024-11-26 18:56:25.224500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.907 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 [2024-11-26 18:56:25.372255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:34.166 [2024-11-26 18:56:25.374985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:34.166 
[2024-11-26 18:56:25.375069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:34.166 [2024-11-26 18:56:25.375146] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:34.166 [2024-11-26 18:56:25.375236] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:34.166 [2024-11-26 18:56:25.375274] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:34.166 [2024-11-26 18:56:25.375301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.166 [2024-11-26 18:56:25.375315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:34.166 request: 00:09:34.166 { 00:09:34.166 "name": "raid_bdev1", 00:09:34.166 "raid_level": "concat", 00:09:34.166 "base_bdevs": [ 00:09:34.166 "malloc1", 00:09:34.166 "malloc2", 00:09:34.166 "malloc3" 00:09:34.166 ], 00:09:34.166 "strip_size_kb": 64, 00:09:34.166 "superblock": false, 00:09:34.166 "method": "bdev_raid_create", 00:09:34.166 "req_id": 1 00:09:34.166 } 00:09:34.166 Got JSON-RPC error response 00:09:34.166 response: 00:09:34.166 { 00:09:34.166 "code": -17, 00:09:34.166 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:34.166 } 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.166 18:56:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 [2024-11-26 18:56:25.440345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.166 [2024-11-26 18:56:25.440571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.166 [2024-11-26 18:56:25.440645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:34.166 [2024-11-26 18:56:25.440754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.166 [2024-11-26 18:56:25.443931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.166 [2024-11-26 18:56:25.444095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.166 [2024-11-26 18:56:25.444316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.166 pt1 00:09:34.166 [2024-11-26 18:56:25.444485] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.166 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.167 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.167 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.167 "name": "raid_bdev1", 00:09:34.167 
"uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77", 00:09:34.167 "strip_size_kb": 64, 00:09:34.167 "state": "configuring", 00:09:34.167 "raid_level": "concat", 00:09:34.167 "superblock": true, 00:09:34.167 "num_base_bdevs": 3, 00:09:34.167 "num_base_bdevs_discovered": 1, 00:09:34.167 "num_base_bdevs_operational": 3, 00:09:34.167 "base_bdevs_list": [ 00:09:34.167 { 00:09:34.167 "name": "pt1", 00:09:34.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.167 "is_configured": true, 00:09:34.167 "data_offset": 2048, 00:09:34.167 "data_size": 63488 00:09:34.167 }, 00:09:34.167 { 00:09:34.167 "name": null, 00:09:34.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.167 "is_configured": false, 00:09:34.167 "data_offset": 2048, 00:09:34.167 "data_size": 63488 00:09:34.167 }, 00:09:34.167 { 00:09:34.167 "name": null, 00:09:34.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.167 "is_configured": false, 00:09:34.167 "data_offset": 2048, 00:09:34.167 "data_size": 63488 00:09:34.167 } 00:09:34.167 ] 00:09:34.167 }' 00:09:34.167 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.167 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.734 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:34.734 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.734 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.734 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.734 [2024-11-26 18:56:25.960639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.734 [2024-11-26 18:56:25.960731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.735 [2024-11-26 18:56:25.960776] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:34.735 [2024-11-26 18:56:25.960793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.735 [2024-11-26 18:56:25.961406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.735 [2024-11-26 18:56:25.961453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.735 [2024-11-26 18:56:25.961581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:34.735 [2024-11-26 18:56:25.961620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.735 pt2 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.735 [2024-11-26 18:56:25.968629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.735 18:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.735 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.735 "name": "raid_bdev1",
00:09:34.735 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77",
00:09:34.735 "strip_size_kb": 64,
00:09:34.735 "state": "configuring",
00:09:34.735 "raid_level": "concat",
00:09:34.735 "superblock": true,
00:09:34.735 "num_base_bdevs": 3,
00:09:34.735 "num_base_bdevs_discovered": 1,
00:09:34.735 "num_base_bdevs_operational": 3,
00:09:34.735 "base_bdevs_list": [
00:09:34.735 {
00:09:34.735 "name": "pt1",
00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:34.735 "is_configured": true,
00:09:34.735 "data_offset": 2048,
00:09:34.735 "data_size": 63488
00:09:34.735 },
00:09:34.735 {
00:09:34.735 "name": null,
00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:34.735 "is_configured": false,
00:09:34.735 "data_offset": 0,
00:09:34.735 "data_size": 63488
00:09:34.735 },
00:09:34.735 {
00:09:34.735 "name": null,
00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:34.735 "is_configured": false,
00:09:34.735 "data_offset": 2048,
00:09:34.735 "data_size": 63488
00:09:34.735 }
00:09:34.735 ]
00:09:34.735 }'
00:09:34.735 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.735 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.302 [2024-11-26 18:56:26.520769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:35.302 [2024-11-26 18:56:26.520868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.302 [2024-11-26 18:56:26.520918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:35.302 [2024-11-26 18:56:26.520939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.302 [2024-11-26 18:56:26.521573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.302 [2024-11-26 18:56:26.521603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:35.302 [2024-11-26 18:56:26.521703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:35.302 [2024-11-26 18:56:26.521739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:35.302 pt2
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.302 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.303 [2024-11-26 18:56:26.532733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:35.303 [2024-11-26 18:56:26.533004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:35.303 [2024-11-26 18:56:26.533045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:35.303 [2024-11-26 18:56:26.533065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:35.303 [2024-11-26 18:56:26.533549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:35.303 [2024-11-26 18:56:26.533592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:35.303 [2024-11-26 18:56:26.533674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:35.303 [2024-11-26 18:56:26.533708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:35.303 [2024-11-26 18:56:26.533871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:35.303 [2024-11-26 18:56:26.533890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:35.303 [2024-11-26 18:56:26.534241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:09:35.303 [2024-11-26 18:56:26.534440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:35.303 [2024-11-26 18:56:26.534456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:35.303 [2024-11-26 18:56:26.534632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:35.303 pt3
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.303 "name": "raid_bdev1",
00:09:35.303 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77",
00:09:35.303 "strip_size_kb": 64,
00:09:35.303 "state": "online",
00:09:35.303 "raid_level": "concat",
00:09:35.303 "superblock": true,
00:09:35.303 "num_base_bdevs": 3,
00:09:35.303 "num_base_bdevs_discovered": 3,
00:09:35.303 "num_base_bdevs_operational": 3,
00:09:35.303 "base_bdevs_list": [
00:09:35.303 {
00:09:35.303 "name": "pt1",
00:09:35.303 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:35.303 "is_configured": true,
00:09:35.303 "data_offset": 2048,
00:09:35.303 "data_size": 63488
00:09:35.303 },
00:09:35.303 {
00:09:35.303 "name": "pt2",
00:09:35.303 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:35.303 "is_configured": true,
00:09:35.303 "data_offset": 2048,
00:09:35.303 "data_size": 63488
00:09:35.303 },
00:09:35.303 {
00:09:35.303 "name": "pt3",
00:09:35.303 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:35.303 "is_configured": true,
00:09:35.303 "data_offset": 2048,
00:09:35.303 "data_size": 63488
00:09:35.303 }
00:09:35.303 ]
00:09:35.303 }'
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.303 18:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.871 [2024-11-26 18:56:27.109426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:35.871 "name": "raid_bdev1",
00:09:35.871 "aliases": [
00:09:35.871 "240bcffc-e53d-4b96-9fcc-cb2b38e79f77"
00:09:35.871 ],
00:09:35.871 "product_name": "Raid Volume",
00:09:35.871 "block_size": 512,
00:09:35.871 "num_blocks": 190464,
00:09:35.871 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77",
00:09:35.871 "assigned_rate_limits": {
00:09:35.871 "rw_ios_per_sec": 0,
00:09:35.871 "rw_mbytes_per_sec": 0,
00:09:35.871 "r_mbytes_per_sec": 0,
00:09:35.871 "w_mbytes_per_sec": 0
00:09:35.871 },
00:09:35.871 "claimed": false,
00:09:35.871 "zoned": false,
00:09:35.871 "supported_io_types": {
00:09:35.871 "read": true,
00:09:35.871 "write": true,
00:09:35.871 "unmap": true,
00:09:35.871 "flush": true,
00:09:35.871 "reset": true,
00:09:35.871 "nvme_admin": false,
00:09:35.871 "nvme_io": false,
00:09:35.871 "nvme_io_md": false,
00:09:35.871 "write_zeroes": true,
00:09:35.871 "zcopy": false,
00:09:35.871 "get_zone_info": false,
00:09:35.871 "zone_management": false,
00:09:35.871 "zone_append": false,
00:09:35.871 "compare": false,
00:09:35.871 "compare_and_write": false,
00:09:35.871 "abort": false,
00:09:35.871 "seek_hole": false,
00:09:35.871 "seek_data": false,
00:09:35.871 "copy": false,
00:09:35.871 "nvme_iov_md": false
00:09:35.871 },
00:09:35.871 "memory_domains": [
00:09:35.871 {
00:09:35.871 "dma_device_id": "system",
00:09:35.871 "dma_device_type": 1
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.871 "dma_device_type": 2
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "dma_device_id": "system",
00:09:35.871 "dma_device_type": 1
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.871 "dma_device_type": 2
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "dma_device_id": "system",
00:09:35.871 "dma_device_type": 1
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.871 "dma_device_type": 2
00:09:35.871 }
00:09:35.871 ],
00:09:35.871 "driver_specific": {
00:09:35.871 "raid": {
00:09:35.871 "uuid": "240bcffc-e53d-4b96-9fcc-cb2b38e79f77",
00:09:35.871 "strip_size_kb": 64,
00:09:35.871 "state": "online",
00:09:35.871 "raid_level": "concat",
00:09:35.871 "superblock": true,
00:09:35.871 "num_base_bdevs": 3,
00:09:35.871 "num_base_bdevs_discovered": 3,
00:09:35.871 "num_base_bdevs_operational": 3,
00:09:35.871 "base_bdevs_list": [
00:09:35.871 {
00:09:35.871 "name": "pt1",
00:09:35.871 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:35.871 "is_configured": true,
00:09:35.871 "data_offset": 2048,
00:09:35.871 "data_size": 63488
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "name": "pt2",
00:09:35.871 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:35.871 "is_configured": true,
00:09:35.871 "data_offset": 2048,
00:09:35.871 "data_size": 63488
00:09:35.871 },
00:09:35.871 {
00:09:35.871 "name": "pt3",
00:09:35.871 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:35.871 "is_configured": true,
00:09:35.871 "data_offset": 2048,
00:09:35.871 "data_size": 63488
00:09:35.871 }
00:09:35.871 ]
00:09:35.871 }
00:09:35.871 }
00:09:35.871 }'
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:35.871 pt2
00:09:35.871 pt3'
00:09:35.871 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.130 [2024-11-26 18:56:27.445430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 240bcffc-e53d-4b96-9fcc-cb2b38e79f77 '!=' 240bcffc-e53d-4b96-9fcc-cb2b38e79f77 ']'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66925
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66925 ']'
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66925
00:09:36.130 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66925
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:36.390 killing process with pid 66925
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66925'
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66925
00:09:36.390 [2024-11-26 18:56:27.525103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:36.390 18:56:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66925
00:09:36.390 [2024-11-26 18:56:27.525223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:36.390 [2024-11-26 18:56:27.525306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:36.390 [2024-11-26 18:56:27.525541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:36.648 [2024-11-26 18:56:27.805991] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:37.584 18:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:37.584
00:09:37.584 real 0m5.801s
00:09:37.584 user 0m8.744s
00:09:37.584 sys 0m0.856s
00:09:37.584 ************************************
00:09:37.584 END TEST raid_superblock_test
00:09:37.584 ************************************
00:09:37.584 18:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.584 18:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.584 18:56:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read
00:09:37.584 18:56:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:37.584 18:56:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.584 18:56:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:37.584 ************************************
00:09:37.584 START TEST raid_read_error_test
00:09:37.584 ************************************
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yykhRNdTGc
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67189
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67189
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67189 ']'
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:37.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:37.584 18:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.843 [2024-11-26 18:56:29.044931] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization...
00:09:37.844 [2024-11-26 18:56:29.045329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67189 ]
00:09:38.103 [2024-11-26 18:56:29.222647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.103 [2024-11-26 18:56:29.357529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.361 [2024-11-26 18:56:29.569138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:38.361 [2024-11-26 18:56:29.569215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 BaseBdev1_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 true
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 [2024-11-26 18:56:30.131005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:38.938 [2024-11-26 18:56:30.131075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:38.938 [2024-11-26 18:56:30.131106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:38.938 [2024-11-26 18:56:30.131125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:38.938 [2024-11-26 18:56:30.133953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:38.938 [2024-11-26 18:56:30.134138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:38.938 BaseBdev1
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 BaseBdev2_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 true
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 [2024-11-26 18:56:30.187244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:38.938 [2024-11-26 18:56:30.187317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:38.938 [2024-11-26 18:56:30.187344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:38.938 [2024-11-26 18:56:30.187361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:38.938 [2024-11-26 18:56:30.190213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:38.938 [2024-11-26 18:56:30.190263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:38.938 BaseBdev2
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 BaseBdev3_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 true
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 [2024-11-26 18:56:30.257207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:38.938 [2024-11-26 18:56:30.257292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:38.938 [2024-11-26 18:56:30.257322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:38.938 [2024-11-26 18:56:30.257340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:38.938 [2024-11-26 18:56:30.260356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:38.938 [2024-11-26 18:56:30.260407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:38.938 BaseBdev3
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 [2024-11-26 18:56:30.265343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:38.938 [2024-11-26 18:56:30.267975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:38.938 [2024-11-26 18:56:30.268081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:38.938 [2024-11-26 18:56:30.268362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:38.938 [2024-11-26 18:56:30.268380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:38.938 [2024-11-26 18:56:30.268767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:09:38.938 [2024-11-26 18:56:30.269025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:09:38.938 [2024-11-26 18:56:30.269049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:09:38.938 [2024-11-26 18:56:30.269324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.938 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.197 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.197 "name": "raid_bdev1",
00:09:39.197 "uuid": "0c2f4de9-933c-4c52-955c-ca4d716f831d",
00:09:39.197 "strip_size_kb": 64,
00:09:39.197 "state": "online",
00:09:39.197 "raid_level": "concat",
00:09:39.197 "superblock": true,
00:09:39.197 "num_base_bdevs": 3,
00:09:39.197 "num_base_bdevs_discovered": 3,
00:09:39.197 "num_base_bdevs_operational": 3,
00:09:39.197 "base_bdevs_list": [
00:09:39.197 {
00:09:39.197 "name": "BaseBdev1",
00:09:39.197 "uuid": "e2d8d2ca-3b01-5735-b928-9020160abd94",
00:09:39.197 "is_configured": true,
00:09:39.197 "data_offset": 2048,
00:09:39.197 "data_size": 63488
00:09:39.197 },
00:09:39.197 {
00:09:39.197 "name": "BaseBdev2",
00:09:39.197 "uuid": "c88a55b4-0959-58b0-926d-c1f5bac523ec",
00:09:39.197 "is_configured": true,
00:09:39.197 "data_offset": 2048,
00:09:39.197 "data_size": 63488
00:09:39.197 },
00:09:39.197 {
00:09:39.197 "name": "BaseBdev3",
00:09:39.197 "uuid": "97f3eab7-2b05-515a-885a-4e82b5900d76",
00:09:39.197 "is_configured": true,
00:09:39.197 "data_offset": 2048,
00:09:39.197 "data_size": 63488
00:09:39.197 }
00:09:39.197 ]
00:09:39.197 }'
00:09:39.197 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.197 18:56:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.456 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:39.456 18:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:39.716 [2024-11-26 18:56:30.915116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:40.652 "name": "raid_bdev1",
00:09:40.652 "uuid": "0c2f4de9-933c-4c52-955c-ca4d716f831d",
00:09:40.652 "strip_size_kb": 64,
00:09:40.652 "state": "online",
00:09:40.652 "raid_level": "concat",
00:09:40.652 "superblock": true,
00:09:40.652 "num_base_bdevs": 3,
00:09:40.652 "num_base_bdevs_discovered": 3,
00:09:40.652 "num_base_bdevs_operational": 3,
00:09:40.652 "base_bdevs_list": [
00:09:40.652 {
00:09:40.652 "name": "BaseBdev1",
00:09:40.652 "uuid": "e2d8d2ca-3b01-5735-b928-9020160abd94",
00:09:40.652 "is_configured": true,
00:09:40.652 "data_offset": 2048,
00:09:40.652 "data_size": 63488
00:09:40.652 }, 00:09:40.652 { 00:09:40.652 "name": "BaseBdev2", 00:09:40.652 "uuid": "c88a55b4-0959-58b0-926d-c1f5bac523ec", 00:09:40.652 "is_configured": true, 00:09:40.652 "data_offset": 2048, 00:09:40.652 "data_size": 63488 00:09:40.652 }, 00:09:40.652 { 00:09:40.652 "name": "BaseBdev3", 00:09:40.652 "uuid": "97f3eab7-2b05-515a-885a-4e82b5900d76", 00:09:40.652 "is_configured": true, 00:09:40.652 "data_offset": 2048, 00:09:40.652 "data_size": 63488 00:09:40.652 } 00:09:40.652 ] 00:09:40.652 }' 00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.652 18:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 [2024-11-26 18:56:32.322821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.220 [2024-11-26 18:56:32.323013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.220 [2024-11-26 18:56:32.326509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.220 [2024-11-26 18:56:32.326698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.220 [2024-11-26 18:56:32.326768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.220 [2024-11-26 18:56:32.326784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:41.220 { 00:09:41.220 "results": [ 00:09:41.220 { 00:09:41.220 "job": "raid_bdev1", 00:09:41.220 "core_mask": "0x1", 00:09:41.220 "workload": "randrw", 00:09:41.220 "percentage": 50, 
00:09:41.220 "status": "finished", 00:09:41.220 "queue_depth": 1, 00:09:41.220 "io_size": 131072, 00:09:41.220 "runtime": 1.405407, 00:09:41.220 "iops": 10384.180525641326, 00:09:41.220 "mibps": 1298.0225657051658, 00:09:41.220 "io_failed": 1, 00:09:41.220 "io_timeout": 0, 00:09:41.220 "avg_latency_us": 134.52187287053476, 00:09:41.220 "min_latency_us": 39.33090909090909, 00:09:41.220 "max_latency_us": 1846.9236363636364 00:09:41.220 } 00:09:41.220 ], 00:09:41.220 "core_count": 1 00:09:41.220 } 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67189 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67189 ']' 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67189 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67189 00:09:41.220 killing process with pid 67189 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67189' 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67189 00:09:41.220 [2024-11-26 18:56:32.361549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.220 18:56:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67189 00:09:41.220 [2024-11-26 
18:56:32.572547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yykhRNdTGc 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:42.597 00:09:42.597 real 0m4.813s 00:09:42.597 user 0m5.974s 00:09:42.597 sys 0m0.597s 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.597 18:56:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 ************************************ 00:09:42.597 END TEST raid_read_error_test 00:09:42.597 ************************************ 00:09:42.597 18:56:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:42.597 18:56:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.597 18:56:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.597 18:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 ************************************ 00:09:42.597 START TEST raid_write_error_test 00:09:42.597 ************************************ 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:42.597 18:56:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.597 18:56:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bu3BWA54bP 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67329 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67329 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67329 ']' 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.597 18:56:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 [2024-11-26 18:56:33.943328] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:09:42.597 [2024-11-26 18:56:33.943817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67329 ] 00:09:42.856 [2024-11-26 18:56:34.148160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.115 [2024-11-26 18:56:34.327223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.375 [2024-11-26 18:56:34.601105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.375 [2024-11-26 18:56:34.601192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.634 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.635 BaseBdev1_malloc 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.635 18:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 true 00:09:43.894 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.894 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.894 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.894 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 [2024-11-26 18:56:35.009223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.895 [2024-11-26 18:56:35.009303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.895 [2024-11-26 18:56:35.009332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.895 [2024-11-26 18:56:35.009350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.895 [2024-11-26 18:56:35.012236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.895 [2024-11-26 18:56:35.012300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.895 BaseBdev1 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.895 BaseBdev2_malloc 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 true 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 [2024-11-26 18:56:35.070203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.895 [2024-11-26 18:56:35.070300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.895 [2024-11-26 18:56:35.070324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.895 [2024-11-26 18:56:35.070342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.895 [2024-11-26 18:56:35.073277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.895 [2024-11-26 18:56:35.073530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.895 BaseBdev2 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.895 18:56:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 BaseBdev3_malloc 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 true 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 [2024-11-26 18:56:35.142822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:43.895 [2024-11-26 18:56:35.142903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.895 [2024-11-26 18:56:35.142932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:43.895 [2024-11-26 18:56:35.142951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.895 [2024-11-26 18:56:35.145754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.895 [2024-11-26 18:56:35.145805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:43.895 BaseBdev3 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 [2024-11-26 18:56:35.154956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.895 [2024-11-26 18:56:35.157395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.895 [2024-11-26 18:56:35.157499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.895 [2024-11-26 18:56:35.157766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:43.895 [2024-11-26 18:56:35.157785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.895 [2024-11-26 18:56:35.158129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:43.895 [2024-11-26 18:56:35.158358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:43.895 [2024-11-26 18:56:35.158388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:43.895 [2024-11-26 18:56:35.158571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.895 "name": "raid_bdev1", 00:09:43.895 "uuid": "2c9b3847-369f-41f7-bd99-f0914d81aeba", 00:09:43.895 "strip_size_kb": 64, 00:09:43.895 "state": "online", 00:09:43.895 "raid_level": "concat", 00:09:43.895 "superblock": true, 00:09:43.895 "num_base_bdevs": 3, 00:09:43.895 "num_base_bdevs_discovered": 3, 00:09:43.895 "num_base_bdevs_operational": 3, 00:09:43.895 "base_bdevs_list": [ 00:09:43.895 { 00:09:43.895 
"name": "BaseBdev1", 00:09:43.895 "uuid": "86bb58a3-d394-540d-aa35-151b0a5c77a6", 00:09:43.895 "is_configured": true, 00:09:43.895 "data_offset": 2048, 00:09:43.895 "data_size": 63488 00:09:43.895 }, 00:09:43.895 { 00:09:43.895 "name": "BaseBdev2", 00:09:43.895 "uuid": "dc08884a-c5ef-5ea7-97b9-fc3c26ca2867", 00:09:43.895 "is_configured": true, 00:09:43.895 "data_offset": 2048, 00:09:43.895 "data_size": 63488 00:09:43.895 }, 00:09:43.895 { 00:09:43.895 "name": "BaseBdev3", 00:09:43.895 "uuid": "05f09e04-94e1-5556-a343-cadddd362058", 00:09:43.895 "is_configured": true, 00:09:43.895 "data_offset": 2048, 00:09:43.895 "data_size": 63488 00:09:43.895 } 00:09:43.895 ] 00:09:43.895 }' 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.895 18:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.463 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.463 18:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.463 [2024-11-26 18:56:35.820633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.397 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.655 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.655 "name": "raid_bdev1", 00:09:45.655 "uuid": "2c9b3847-369f-41f7-bd99-f0914d81aeba", 00:09:45.655 "strip_size_kb": 64, 00:09:45.655 "state": "online", 
00:09:45.655 "raid_level": "concat", 00:09:45.655 "superblock": true, 00:09:45.655 "num_base_bdevs": 3, 00:09:45.655 "num_base_bdevs_discovered": 3, 00:09:45.655 "num_base_bdevs_operational": 3, 00:09:45.655 "base_bdevs_list": [ 00:09:45.655 { 00:09:45.655 "name": "BaseBdev1", 00:09:45.655 "uuid": "86bb58a3-d394-540d-aa35-151b0a5c77a6", 00:09:45.655 "is_configured": true, 00:09:45.655 "data_offset": 2048, 00:09:45.655 "data_size": 63488 00:09:45.655 }, 00:09:45.655 { 00:09:45.655 "name": "BaseBdev2", 00:09:45.655 "uuid": "dc08884a-c5ef-5ea7-97b9-fc3c26ca2867", 00:09:45.655 "is_configured": true, 00:09:45.655 "data_offset": 2048, 00:09:45.655 "data_size": 63488 00:09:45.655 }, 00:09:45.655 { 00:09:45.655 "name": "BaseBdev3", 00:09:45.655 "uuid": "05f09e04-94e1-5556-a343-cadddd362058", 00:09:45.655 "is_configured": true, 00:09:45.655 "data_offset": 2048, 00:09:45.655 "data_size": 63488 00:09:45.655 } 00:09:45.655 ] 00:09:45.655 }' 00:09:45.655 18:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.655 18:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.914 [2024-11-26 18:56:37.240033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.914 [2024-11-26 18:56:37.240203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.914 [2024-11-26 18:56:37.243719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.914 [2024-11-26 18:56:37.243918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.914 { 00:09:45.914 "results": [ 00:09:45.914 { 
00:09:45.914 "job": "raid_bdev1", 00:09:45.914 "core_mask": "0x1", 00:09:45.914 "workload": "randrw", 00:09:45.914 "percentage": 50, 00:09:45.914 "status": "finished", 00:09:45.914 "queue_depth": 1, 00:09:45.914 "io_size": 131072, 00:09:45.914 "runtime": 1.417078, 00:09:45.914 "iops": 10738.999546955072, 00:09:45.914 "mibps": 1342.374943369384, 00:09:45.914 "io_failed": 1, 00:09:45.914 "io_timeout": 0, 00:09:45.914 "avg_latency_us": 130.00058443691793, 00:09:45.914 "min_latency_us": 39.09818181818182, 00:09:45.914 "max_latency_us": 1876.7127272727273 00:09:45.914 } 00:09:45.914 ], 00:09:45.914 "core_count": 1 00:09:45.914 } 00:09:45.914 [2024-11-26 18:56:37.244024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.914 [2024-11-26 18:56:37.244050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67329 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67329 ']' 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67329 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.914 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67329 00:09:46.172 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.172 killing process with pid 67329 00:09:46.172 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.172 18:56:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67329' 00:09:46.172 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67329 00:09:46.172 [2024-11-26 18:56:37.284694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.172 18:56:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67329 00:09:46.172 [2024-11-26 18:56:37.494681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bu3BWA54bP 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:47.548 00:09:47.548 real 0m4.844s 00:09:47.548 user 0m6.038s 00:09:47.548 sys 0m0.610s 00:09:47.548 ************************************ 00:09:47.548 END TEST raid_write_error_test 00:09:47.548 ************************************ 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.548 18:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.548 18:56:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:47.548 18:56:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:47.548 18:56:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.548 18:56:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.548 18:56:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.548 ************************************ 00:09:47.548 START TEST raid_state_function_test 00:09:47.548 ************************************ 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.548 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:47.549 Process raid pid: 67478 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67478 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67478' 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67478 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67478 ']' 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.549 18:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.549 [2024-11-26 18:56:38.822624] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:09:47.549 [2024-11-26 18:56:38.822841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.808 [2024-11-26 18:56:39.018475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.808 [2024-11-26 18:56:39.158135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.065 [2024-11-26 18:56:39.372795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.065 [2024-11-26 18:56:39.372851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.631 [2024-11-26 18:56:39.833059] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.631 [2024-11-26 18:56:39.833266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.631 [2024-11-26 18:56:39.833390] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.631 [2024-11-26 18:56:39.833523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.631 [2024-11-26 18:56:39.833546] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.631 [2024-11-26 18:56:39.833568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.631 
18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.631 "name": "Existed_Raid", 00:09:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.631 "strip_size_kb": 0, 00:09:48.631 "state": "configuring", 00:09:48.631 "raid_level": "raid1", 00:09:48.631 "superblock": false, 00:09:48.631 "num_base_bdevs": 3, 00:09:48.631 "num_base_bdevs_discovered": 0, 00:09:48.631 "num_base_bdevs_operational": 3, 00:09:48.631 "base_bdevs_list": [ 00:09:48.631 { 00:09:48.631 "name": "BaseBdev1", 00:09:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.631 "is_configured": false, 00:09:48.631 "data_offset": 0, 00:09:48.631 "data_size": 0 00:09:48.631 }, 00:09:48.631 { 00:09:48.631 "name": "BaseBdev2", 00:09:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.631 "is_configured": false, 00:09:48.631 "data_offset": 0, 00:09:48.631 "data_size": 0 00:09:48.631 }, 00:09:48.631 { 00:09:48.631 "name": "BaseBdev3", 00:09:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.631 "is_configured": false, 00:09:48.631 "data_offset": 0, 00:09:48.631 "data_size": 0 00:09:48.631 } 00:09:48.631 ] 00:09:48.631 }' 00:09:48.631 18:56:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.631 18:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 [2024-11-26 18:56:40.321194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.199 [2024-11-26 18:56:40.321369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 [2024-11-26 18:56:40.329159] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.199 [2024-11-26 18:56:40.329342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.199 [2024-11-26 18:56:40.329460] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.199 [2024-11-26 18:56:40.329589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.199 [2024-11-26 18:56:40.329697] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:49.199 [2024-11-26 18:56:40.329729] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 [2024-11-26 18:56:40.375518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.199 BaseBdev1 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 [ 00:09:49.199 { 00:09:49.199 "name": "BaseBdev1", 00:09:49.199 "aliases": [ 00:09:49.199 "20baa6b0-1174-4bce-98dc-0850bb611ec3" 00:09:49.199 ], 00:09:49.199 "product_name": "Malloc disk", 00:09:49.199 "block_size": 512, 00:09:49.199 "num_blocks": 65536, 00:09:49.199 "uuid": "20baa6b0-1174-4bce-98dc-0850bb611ec3", 00:09:49.199 "assigned_rate_limits": { 00:09:49.199 "rw_ios_per_sec": 0, 00:09:49.199 "rw_mbytes_per_sec": 0, 00:09:49.199 "r_mbytes_per_sec": 0, 00:09:49.199 "w_mbytes_per_sec": 0 00:09:49.199 }, 00:09:49.199 "claimed": true, 00:09:49.199 "claim_type": "exclusive_write", 00:09:49.199 "zoned": false, 00:09:49.199 "supported_io_types": { 00:09:49.199 "read": true, 00:09:49.199 "write": true, 00:09:49.199 "unmap": true, 00:09:49.199 "flush": true, 00:09:49.199 "reset": true, 00:09:49.199 "nvme_admin": false, 00:09:49.199 "nvme_io": false, 00:09:49.199 "nvme_io_md": false, 00:09:49.199 "write_zeroes": true, 00:09:49.199 "zcopy": true, 00:09:49.199 "get_zone_info": false, 00:09:49.199 "zone_management": false, 00:09:49.199 "zone_append": false, 00:09:49.199 "compare": false, 00:09:49.199 "compare_and_write": false, 00:09:49.199 "abort": true, 00:09:49.199 "seek_hole": false, 00:09:49.199 "seek_data": false, 00:09:49.199 "copy": true, 00:09:49.199 "nvme_iov_md": false 00:09:49.199 }, 00:09:49.199 "memory_domains": [ 00:09:49.199 { 00:09:49.199 "dma_device_id": "system", 00:09:49.199 "dma_device_type": 1 00:09:49.199 }, 00:09:49.199 { 00:09:49.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.199 "dma_device_type": 2 00:09:49.199 } 00:09:49.199 ], 00:09:49.199 "driver_specific": {} 00:09:49.199 } 00:09:49.199 ] 00:09:49.199 18:56:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.199 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:49.200 "name": "Existed_Raid", 00:09:49.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.200 "strip_size_kb": 0, 00:09:49.200 "state": "configuring", 00:09:49.200 "raid_level": "raid1", 00:09:49.200 "superblock": false, 00:09:49.200 "num_base_bdevs": 3, 00:09:49.200 "num_base_bdevs_discovered": 1, 00:09:49.200 "num_base_bdevs_operational": 3, 00:09:49.200 "base_bdevs_list": [ 00:09:49.200 { 00:09:49.200 "name": "BaseBdev1", 00:09:49.200 "uuid": "20baa6b0-1174-4bce-98dc-0850bb611ec3", 00:09:49.200 "is_configured": true, 00:09:49.200 "data_offset": 0, 00:09:49.200 "data_size": 65536 00:09:49.200 }, 00:09:49.200 { 00:09:49.200 "name": "BaseBdev2", 00:09:49.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.200 "is_configured": false, 00:09:49.200 "data_offset": 0, 00:09:49.200 "data_size": 0 00:09:49.200 }, 00:09:49.200 { 00:09:49.200 "name": "BaseBdev3", 00:09:49.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.200 "is_configured": false, 00:09:49.200 "data_offset": 0, 00:09:49.200 "data_size": 0 00:09:49.200 } 00:09:49.200 ] 00:09:49.200 }' 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.200 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.769 [2024-11-26 18:56:40.907719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.769 [2024-11-26 18:56:40.907939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.769 [2024-11-26 18:56:40.919776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.769 [2024-11-26 18:56:40.922660] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.769 [2024-11-26 18:56:40.922824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.769 [2024-11-26 18:56:40.922953] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:49.769 [2024-11-26 18:56:40.923013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.769 "name": "Existed_Raid", 00:09:49.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.769 "strip_size_kb": 0, 00:09:49.769 "state": "configuring", 00:09:49.769 "raid_level": "raid1", 00:09:49.769 "superblock": false, 00:09:49.769 "num_base_bdevs": 3, 00:09:49.769 "num_base_bdevs_discovered": 1, 00:09:49.769 "num_base_bdevs_operational": 3, 00:09:49.769 "base_bdevs_list": [ 00:09:49.769 { 00:09:49.769 "name": "BaseBdev1", 00:09:49.769 "uuid": "20baa6b0-1174-4bce-98dc-0850bb611ec3", 00:09:49.769 "is_configured": true, 00:09:49.769 "data_offset": 0, 00:09:49.769 "data_size": 65536 00:09:49.769 }, 00:09:49.769 { 00:09:49.769 "name": "BaseBdev2", 00:09:49.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.769 
"is_configured": false, 00:09:49.769 "data_offset": 0, 00:09:49.769 "data_size": 0 00:09:49.769 }, 00:09:49.769 { 00:09:49.769 "name": "BaseBdev3", 00:09:49.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.769 "is_configured": false, 00:09:49.769 "data_offset": 0, 00:09:49.769 "data_size": 0 00:09:49.769 } 00:09:49.769 ] 00:09:49.769 }' 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.769 18:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 [2024-11-26 18:56:41.514841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.353 BaseBdev2 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.353 18:56:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 [ 00:09:50.353 { 00:09:50.353 "name": "BaseBdev2", 00:09:50.353 "aliases": [ 00:09:50.353 "33390aac-1915-478d-b58f-7bcd4a1ea48c" 00:09:50.353 ], 00:09:50.353 "product_name": "Malloc disk", 00:09:50.353 "block_size": 512, 00:09:50.353 "num_blocks": 65536, 00:09:50.353 "uuid": "33390aac-1915-478d-b58f-7bcd4a1ea48c", 00:09:50.353 "assigned_rate_limits": { 00:09:50.353 "rw_ios_per_sec": 0, 00:09:50.353 "rw_mbytes_per_sec": 0, 00:09:50.353 "r_mbytes_per_sec": 0, 00:09:50.353 "w_mbytes_per_sec": 0 00:09:50.353 }, 00:09:50.353 "claimed": true, 00:09:50.353 "claim_type": "exclusive_write", 00:09:50.353 "zoned": false, 00:09:50.353 "supported_io_types": { 00:09:50.353 "read": true, 00:09:50.353 "write": true, 00:09:50.353 "unmap": true, 00:09:50.353 "flush": true, 00:09:50.353 "reset": true, 00:09:50.353 "nvme_admin": false, 00:09:50.353 "nvme_io": false, 00:09:50.353 "nvme_io_md": false, 00:09:50.353 "write_zeroes": true, 00:09:50.353 "zcopy": true, 00:09:50.353 "get_zone_info": false, 00:09:50.353 "zone_management": false, 00:09:50.353 "zone_append": false, 00:09:50.353 "compare": false, 00:09:50.353 "compare_and_write": false, 00:09:50.353 "abort": true, 00:09:50.353 "seek_hole": false, 00:09:50.353 "seek_data": false, 00:09:50.353 "copy": true, 00:09:50.353 "nvme_iov_md": false 00:09:50.353 }, 00:09:50.353 
"memory_domains": [ 00:09:50.353 { 00:09:50.353 "dma_device_id": "system", 00:09:50.353 "dma_device_type": 1 00:09:50.353 }, 00:09:50.353 { 00:09:50.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.353 "dma_device_type": 2 00:09:50.353 } 00:09:50.353 ], 00:09:50.353 "driver_specific": {} 00:09:50.353 } 00:09:50.353 ] 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.353 "name": "Existed_Raid", 00:09:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.353 "strip_size_kb": 0, 00:09:50.353 "state": "configuring", 00:09:50.353 "raid_level": "raid1", 00:09:50.353 "superblock": false, 00:09:50.353 "num_base_bdevs": 3, 00:09:50.353 "num_base_bdevs_discovered": 2, 00:09:50.353 "num_base_bdevs_operational": 3, 00:09:50.353 "base_bdevs_list": [ 00:09:50.353 { 00:09:50.353 "name": "BaseBdev1", 00:09:50.353 "uuid": "20baa6b0-1174-4bce-98dc-0850bb611ec3", 00:09:50.353 "is_configured": true, 00:09:50.353 "data_offset": 0, 00:09:50.353 "data_size": 65536 00:09:50.353 }, 00:09:50.353 { 00:09:50.353 "name": "BaseBdev2", 00:09:50.353 "uuid": "33390aac-1915-478d-b58f-7bcd4a1ea48c", 00:09:50.353 "is_configured": true, 00:09:50.353 "data_offset": 0, 00:09:50.353 "data_size": 65536 00:09:50.353 }, 00:09:50.353 { 00:09:50.353 "name": "BaseBdev3", 00:09:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.353 "is_configured": false, 00:09:50.353 "data_offset": 0, 00:09:50.353 "data_size": 0 00:09:50.353 } 00:09:50.353 ] 00:09:50.353 }' 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.353 18:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.927 [2024-11-26 18:56:42.166468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.927 [2024-11-26 18:56:42.166723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:50.927 [2024-11-26 18:56:42.166757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:50.927 [2024-11-26 18:56:42.167158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:50.927 [2024-11-26 18:56:42.167402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:50.927 [2024-11-26 18:56:42.167419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:50.927 [2024-11-26 18:56:42.167851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.927 BaseBdev3 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.927 [ 00:09:50.927 { 00:09:50.927 "name": "BaseBdev3", 00:09:50.927 "aliases": [ 00:09:50.927 "92397d8b-1c6a-4ad7-bb3f-c2cd0cfb7a8a" 00:09:50.927 ], 00:09:50.927 "product_name": "Malloc disk", 00:09:50.927 "block_size": 512, 00:09:50.927 "num_blocks": 65536, 00:09:50.927 "uuid": "92397d8b-1c6a-4ad7-bb3f-c2cd0cfb7a8a", 00:09:50.927 "assigned_rate_limits": { 00:09:50.927 "rw_ios_per_sec": 0, 00:09:50.927 "rw_mbytes_per_sec": 0, 00:09:50.927 "r_mbytes_per_sec": 0, 00:09:50.927 "w_mbytes_per_sec": 0 00:09:50.927 }, 00:09:50.927 "claimed": true, 00:09:50.927 "claim_type": "exclusive_write", 00:09:50.927 "zoned": false, 00:09:50.927 "supported_io_types": { 00:09:50.927 "read": true, 00:09:50.927 "write": true, 00:09:50.927 "unmap": true, 00:09:50.927 "flush": true, 00:09:50.927 "reset": true, 00:09:50.927 "nvme_admin": false, 00:09:50.927 "nvme_io": false, 00:09:50.927 "nvme_io_md": false, 00:09:50.927 "write_zeroes": true, 00:09:50.927 "zcopy": true, 00:09:50.927 "get_zone_info": false, 00:09:50.927 "zone_management": false, 00:09:50.927 "zone_append": false, 00:09:50.927 "compare": false, 00:09:50.927 "compare_and_write": false, 00:09:50.927 "abort": true, 00:09:50.927 "seek_hole": false, 00:09:50.927 "seek_data": false, 00:09:50.927 
"copy": true, 00:09:50.927 "nvme_iov_md": false 00:09:50.927 }, 00:09:50.927 "memory_domains": [ 00:09:50.927 { 00:09:50.927 "dma_device_id": "system", 00:09:50.927 "dma_device_type": 1 00:09:50.927 }, 00:09:50.927 { 00:09:50.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.927 "dma_device_type": 2 00:09:50.927 } 00:09:50.927 ], 00:09:50.927 "driver_specific": {} 00:09:50.927 } 00:09:50.927 ] 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.927 18:56:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.927 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.928 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.928 "name": "Existed_Raid", 00:09:50.928 "uuid": "cd5da7ec-4c64-44c6-b446-d4e386ceccc8", 00:09:50.928 "strip_size_kb": 0, 00:09:50.928 "state": "online", 00:09:50.928 "raid_level": "raid1", 00:09:50.928 "superblock": false, 00:09:50.928 "num_base_bdevs": 3, 00:09:50.928 "num_base_bdevs_discovered": 3, 00:09:50.928 "num_base_bdevs_operational": 3, 00:09:50.928 "base_bdevs_list": [ 00:09:50.928 { 00:09:50.928 "name": "BaseBdev1", 00:09:50.928 "uuid": "20baa6b0-1174-4bce-98dc-0850bb611ec3", 00:09:50.928 "is_configured": true, 00:09:50.928 "data_offset": 0, 00:09:50.928 "data_size": 65536 00:09:50.928 }, 00:09:50.928 { 00:09:50.928 "name": "BaseBdev2", 00:09:50.928 "uuid": "33390aac-1915-478d-b58f-7bcd4a1ea48c", 00:09:50.928 "is_configured": true, 00:09:50.928 "data_offset": 0, 00:09:50.928 "data_size": 65536 00:09:50.928 }, 00:09:50.928 { 00:09:50.928 "name": "BaseBdev3", 00:09:50.928 "uuid": "92397d8b-1c6a-4ad7-bb3f-c2cd0cfb7a8a", 00:09:50.928 "is_configured": true, 00:09:50.928 "data_offset": 0, 00:09:50.928 "data_size": 65536 00:09:50.928 } 00:09:50.928 ] 00:09:50.928 }' 00:09:50.928 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.928 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.496 18:56:42 
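The `verify_raid_bdev_state` calls traced above compare fields of the `bdev_raid_get_bdevs all` output against expected values. A minimal Python sketch of that check, using a trimmed copy of the Existed_Raid JSON dumped in the log (the helper name mirrors the shell function; the Python version itself is illustrative, not SPDK code):

```python
import json

# Trimmed copy of the `bdev_raid_get_bdevs all` output shown in the log;
# only the fields that verify_raid_bdev_state inspects are kept.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the comparisons the shell helper performs: each field of the
    # RPC output must match the expected value the test passed in.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# Corresponds to `verify_raid_bdev_state Existed_Raid online raid1 0 3`
# after BaseBdev3 joins and the raid goes from configuring to online.
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 3))
```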
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.496 [2024-11-26 18:56:42.763122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.496 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.496 "name": "Existed_Raid", 00:09:51.496 "aliases": [ 00:09:51.496 "cd5da7ec-4c64-44c6-b446-d4e386ceccc8" 00:09:51.496 ], 00:09:51.496 "product_name": "Raid Volume", 00:09:51.496 "block_size": 512, 00:09:51.496 "num_blocks": 65536, 00:09:51.496 "uuid": "cd5da7ec-4c64-44c6-b446-d4e386ceccc8", 00:09:51.496 "assigned_rate_limits": { 00:09:51.496 "rw_ios_per_sec": 0, 00:09:51.496 "rw_mbytes_per_sec": 0, 00:09:51.496 "r_mbytes_per_sec": 0, 00:09:51.496 "w_mbytes_per_sec": 0 00:09:51.496 }, 00:09:51.496 "claimed": false, 00:09:51.496 "zoned": false, 
00:09:51.496 "supported_io_types": { 00:09:51.496 "read": true, 00:09:51.496 "write": true, 00:09:51.496 "unmap": false, 00:09:51.496 "flush": false, 00:09:51.496 "reset": true, 00:09:51.496 "nvme_admin": false, 00:09:51.496 "nvme_io": false, 00:09:51.496 "nvme_io_md": false, 00:09:51.496 "write_zeroes": true, 00:09:51.496 "zcopy": false, 00:09:51.496 "get_zone_info": false, 00:09:51.496 "zone_management": false, 00:09:51.496 "zone_append": false, 00:09:51.496 "compare": false, 00:09:51.496 "compare_and_write": false, 00:09:51.496 "abort": false, 00:09:51.497 "seek_hole": false, 00:09:51.497 "seek_data": false, 00:09:51.497 "copy": false, 00:09:51.497 "nvme_iov_md": false 00:09:51.497 }, 00:09:51.497 "memory_domains": [ 00:09:51.497 { 00:09:51.497 "dma_device_id": "system", 00:09:51.497 "dma_device_type": 1 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.497 "dma_device_type": 2 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "dma_device_id": "system", 00:09:51.497 "dma_device_type": 1 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.497 "dma_device_type": 2 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "dma_device_id": "system", 00:09:51.497 "dma_device_type": 1 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.497 "dma_device_type": 2 00:09:51.497 } 00:09:51.497 ], 00:09:51.497 "driver_specific": { 00:09:51.497 "raid": { 00:09:51.497 "uuid": "cd5da7ec-4c64-44c6-b446-d4e386ceccc8", 00:09:51.497 "strip_size_kb": 0, 00:09:51.497 "state": "online", 00:09:51.497 "raid_level": "raid1", 00:09:51.497 "superblock": false, 00:09:51.497 "num_base_bdevs": 3, 00:09:51.497 "num_base_bdevs_discovered": 3, 00:09:51.497 "num_base_bdevs_operational": 3, 00:09:51.497 "base_bdevs_list": [ 00:09:51.497 { 00:09:51.497 "name": "BaseBdev1", 00:09:51.497 "uuid": "20baa6b0-1174-4bce-98dc-0850bb611ec3", 00:09:51.497 "is_configured": true, 00:09:51.497 
"data_offset": 0, 00:09:51.497 "data_size": 65536 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "name": "BaseBdev2", 00:09:51.497 "uuid": "33390aac-1915-478d-b58f-7bcd4a1ea48c", 00:09:51.497 "is_configured": true, 00:09:51.497 "data_offset": 0, 00:09:51.497 "data_size": 65536 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "name": "BaseBdev3", 00:09:51.497 "uuid": "92397d8b-1c6a-4ad7-bb3f-c2cd0cfb7a8a", 00:09:51.497 "is_configured": true, 00:09:51.497 "data_offset": 0, 00:09:51.497 "data_size": 65536 00:09:51.497 } 00:09:51.497 ] 00:09:51.497 } 00:09:51.497 } 00:09:51.497 }' 00:09:51.497 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.497 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:51.497 BaseBdev2 00:09:51.497 BaseBdev3' 00:09:51.497 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.755 18:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.755 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.755 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.755 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.755 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:51.755 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.756 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.756 [2024-11-26 18:56:43.054823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.015 "name": "Existed_Raid", 00:09:52.015 "uuid": "cd5da7ec-4c64-44c6-b446-d4e386ceccc8", 00:09:52.015 "strip_size_kb": 0, 00:09:52.015 "state": "online", 00:09:52.015 "raid_level": "raid1", 00:09:52.015 "superblock": false, 00:09:52.015 "num_base_bdevs": 3, 00:09:52.015 "num_base_bdevs_discovered": 2, 00:09:52.015 "num_base_bdevs_operational": 2, 00:09:52.015 "base_bdevs_list": [ 00:09:52.015 { 00:09:52.015 "name": null, 00:09:52.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.015 "is_configured": false, 00:09:52.015 "data_offset": 0, 00:09:52.015 "data_size": 65536 00:09:52.015 }, 00:09:52.015 { 00:09:52.015 "name": "BaseBdev2", 00:09:52.015 "uuid": "33390aac-1915-478d-b58f-7bcd4a1ea48c", 00:09:52.015 "is_configured": true, 00:09:52.015 "data_offset": 0, 00:09:52.015 "data_size": 65536 00:09:52.015 }, 00:09:52.015 { 00:09:52.015 "name": "BaseBdev3", 00:09:52.015 "uuid": "92397d8b-1c6a-4ad7-bb3f-c2cd0cfb7a8a", 00:09:52.015 "is_configured": true, 00:09:52.015 "data_offset": 0, 00:09:52.015 "data_size": 65536 00:09:52.015 } 00:09:52.015 ] 
00:09:52.015 }' 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.015 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.584 [2024-11-26 18:56:43.725560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:52.584 18:56:43 
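After `bdev_malloc_delete BaseBdev1` the trace calls `has_redundancy raid1` (bdev_raid.sh@261) and expects the raid to stay online with 2 of 3 base bdevs. A sketch of that decision; only raid1 is confirmed redundant by this log, any other redundant levels the real helper recognizes are not modeled here:

```python
# After removing one base bdev, a redundant raid level keeps the array
# online in degraded mode; the log shows raid1 staying "online" with
# num_base_bdevs_discovered dropping from 3 to 2.
def expected_state_after_removal(raid_level):
    redundant_levels = {"raid1"}  # raid1 per this log; others not modeled
    return "online" if raid_level in redundant_levels else "offline"

print(expected_state_after_removal("raid1"))  # online
```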
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.584 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.584 [2024-11-26 18:56:43.872674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:52.584 [2024-11-26 18:56:43.872968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.844 [2024-11-26 18:56:43.959757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.844 [2024-11-26 18:56:43.960014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.844 [2024-11-26 18:56:43.960050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:52.844 18:56:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.844 18:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.844 BaseBdev2 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.844 
18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.844 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.844 [ 00:09:52.844 { 00:09:52.844 "name": "BaseBdev2", 00:09:52.844 "aliases": [ 00:09:52.844 "3b0b54e7-4658-458c-bfcb-d33ea42f589c" 00:09:52.844 ], 00:09:52.844 "product_name": "Malloc disk", 00:09:52.844 "block_size": 512, 00:09:52.844 "num_blocks": 65536, 00:09:52.844 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:52.844 "assigned_rate_limits": { 00:09:52.844 "rw_ios_per_sec": 0, 00:09:52.844 "rw_mbytes_per_sec": 0, 00:09:52.844 "r_mbytes_per_sec": 0, 00:09:52.844 "w_mbytes_per_sec": 0 00:09:52.844 }, 00:09:52.844 "claimed": false, 00:09:52.844 "zoned": false, 00:09:52.844 "supported_io_types": { 00:09:52.844 "read": true, 00:09:52.844 "write": true, 00:09:52.844 "unmap": true, 00:09:52.844 "flush": true, 00:09:52.844 "reset": true, 00:09:52.844 "nvme_admin": false, 00:09:52.844 "nvme_io": false, 00:09:52.844 "nvme_io_md": false, 00:09:52.844 "write_zeroes": true, 
00:09:52.844 "zcopy": true, 00:09:52.844 "get_zone_info": false, 00:09:52.844 "zone_management": false, 00:09:52.844 "zone_append": false, 00:09:52.844 "compare": false, 00:09:52.844 "compare_and_write": false, 00:09:52.844 "abort": true, 00:09:52.844 "seek_hole": false, 00:09:52.844 "seek_data": false, 00:09:52.844 "copy": true, 00:09:52.844 "nvme_iov_md": false 00:09:52.844 }, 00:09:52.844 "memory_domains": [ 00:09:52.844 { 00:09:52.845 "dma_device_id": "system", 00:09:52.845 "dma_device_type": 1 00:09:52.845 }, 00:09:52.845 { 00:09:52.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.845 "dma_device_type": 2 00:09:52.845 } 00:09:52.845 ], 00:09:52.845 "driver_specific": {} 00:09:52.845 } 00:09:52.845 ] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.845 BaseBdev3 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.845 18:56:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.845 [ 00:09:52.845 { 00:09:52.845 "name": "BaseBdev3", 00:09:52.845 "aliases": [ 00:09:52.845 "73e5d78c-a55f-45e2-9222-a7b77b8bf77b" 00:09:52.845 ], 00:09:52.845 "product_name": "Malloc disk", 00:09:52.845 "block_size": 512, 00:09:52.845 "num_blocks": 65536, 00:09:52.845 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:52.845 "assigned_rate_limits": { 00:09:52.845 "rw_ios_per_sec": 0, 00:09:52.845 "rw_mbytes_per_sec": 0, 00:09:52.845 "r_mbytes_per_sec": 0, 00:09:52.845 "w_mbytes_per_sec": 0 00:09:52.845 }, 00:09:52.845 "claimed": false, 00:09:52.845 "zoned": false, 00:09:52.845 "supported_io_types": { 00:09:52.845 "read": true, 00:09:52.845 "write": true, 00:09:52.845 "unmap": true, 00:09:52.845 "flush": true, 00:09:52.845 "reset": true, 00:09:52.845 "nvme_admin": false, 00:09:52.845 "nvme_io": false, 00:09:52.845 "nvme_io_md": false, 00:09:52.845 "write_zeroes": true, 
00:09:52.845 "zcopy": true, 00:09:52.845 "get_zone_info": false, 00:09:52.845 "zone_management": false, 00:09:52.845 "zone_append": false, 00:09:52.845 "compare": false, 00:09:52.845 "compare_and_write": false, 00:09:52.845 "abort": true, 00:09:52.845 "seek_hole": false, 00:09:52.845 "seek_data": false, 00:09:52.845 "copy": true, 00:09:52.845 "nvme_iov_md": false 00:09:52.845 }, 00:09:52.845 "memory_domains": [ 00:09:52.845 { 00:09:52.845 "dma_device_id": "system", 00:09:52.845 "dma_device_type": 1 00:09:52.845 }, 00:09:52.845 { 00:09:52.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.845 "dma_device_type": 2 00:09:52.845 } 00:09:52.845 ], 00:09:52.845 "driver_specific": {} 00:09:52.845 } 00:09:52.845 ] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.845 [2024-11-26 18:56:44.183622] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.845 [2024-11-26 18:56:44.183807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.845 [2024-11-26 18:56:44.183955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.845 [2024-11-26 18:56:44.186503] 
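The `waitforbdev` helper traced above issues `bdev_wait_for_examine` and then `bdev_get_bdevs -b <name> -t 2000`, letting the RPC itself wait up to 2000 ms for the bdev to appear. A rough client-side Python equivalent of that wait, done as a poll loop (the `get_bdevs` callable stands in for the RPC and is hypothetical):

```python
import time

# Client-side sketch of waitforbdev: poll the bdev list until the named
# bdev shows up or the timeout (2000 ms, per bdev_timeout=2000 in the
# log) expires. The real helper delegates the wait to the RPC via -t.
def waitforbdev(get_bdevs, bdev_name, timeout_ms=2000, poll_ms=50):
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if any(b["name"] == bdev_name for b in get_bdevs()):
            return True
        time.sleep(poll_ms / 1000.0)
    return False

# Stand-in for `rpc.py bdev_get_bdevs`, returning the bdevs recreated in
# this phase of the log before Existed_Raid is rebuilt from them.
print(waitforbdev(lambda: [{"name": "BaseBdev2"}, {"name": "BaseBdev3"}],
                  "BaseBdev3"))
```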
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.845 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.104 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.104 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:53.104 "name": "Existed_Raid", 00:09:53.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.104 "strip_size_kb": 0, 00:09:53.104 "state": "configuring", 00:09:53.104 "raid_level": "raid1", 00:09:53.104 "superblock": false, 00:09:53.104 "num_base_bdevs": 3, 00:09:53.104 "num_base_bdevs_discovered": 2, 00:09:53.104 "num_base_bdevs_operational": 3, 00:09:53.104 "base_bdevs_list": [ 00:09:53.104 { 00:09:53.104 "name": "BaseBdev1", 00:09:53.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.104 "is_configured": false, 00:09:53.104 "data_offset": 0, 00:09:53.104 "data_size": 0 00:09:53.104 }, 00:09:53.104 { 00:09:53.104 "name": "BaseBdev2", 00:09:53.104 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:53.104 "is_configured": true, 00:09:53.104 "data_offset": 0, 00:09:53.104 "data_size": 65536 00:09:53.104 }, 00:09:53.104 { 00:09:53.104 "name": "BaseBdev3", 00:09:53.104 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:53.104 "is_configured": true, 00:09:53.104 "data_offset": 0, 00:09:53.104 "data_size": 65536 00:09:53.104 } 00:09:53.104 ] 00:09:53.104 }' 00:09:53.104 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.104 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 [2024-11-26 18:56:44.719830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.363 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.364 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.364 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.364 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.364 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.364 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.364 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.622 "name": "Existed_Raid", 00:09:53.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.622 "strip_size_kb": 0, 00:09:53.622 "state": "configuring", 00:09:53.622 "raid_level": "raid1", 00:09:53.622 "superblock": false, 00:09:53.622 "num_base_bdevs": 3, 
00:09:53.622 "num_base_bdevs_discovered": 1, 00:09:53.622 "num_base_bdevs_operational": 3, 00:09:53.622 "base_bdevs_list": [ 00:09:53.622 { 00:09:53.622 "name": "BaseBdev1", 00:09:53.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.622 "is_configured": false, 00:09:53.622 "data_offset": 0, 00:09:53.622 "data_size": 0 00:09:53.622 }, 00:09:53.622 { 00:09:53.622 "name": null, 00:09:53.622 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:53.622 "is_configured": false, 00:09:53.622 "data_offset": 0, 00:09:53.622 "data_size": 65536 00:09:53.622 }, 00:09:53.622 { 00:09:53.622 "name": "BaseBdev3", 00:09:53.622 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:53.622 "is_configured": true, 00:09:53.622 "data_offset": 0, 00:09:53.622 "data_size": 65536 00:09:53.622 } 00:09:53.622 ] 00:09:53.622 }' 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.622 18:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.190 18:56:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 [2024-11-26 18:56:45.354900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.190 BaseBdev1 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 [ 00:09:54.190 { 00:09:54.190 "name": "BaseBdev1", 00:09:54.190 "aliases": [ 00:09:54.190 "52086c58-81f3-4ae6-be8d-c895de45aa62" 00:09:54.190 ], 00:09:54.190 "product_name": "Malloc disk", 
00:09:54.190 "block_size": 512, 00:09:54.190 "num_blocks": 65536, 00:09:54.190 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:54.190 "assigned_rate_limits": { 00:09:54.190 "rw_ios_per_sec": 0, 00:09:54.190 "rw_mbytes_per_sec": 0, 00:09:54.190 "r_mbytes_per_sec": 0, 00:09:54.190 "w_mbytes_per_sec": 0 00:09:54.190 }, 00:09:54.190 "claimed": true, 00:09:54.190 "claim_type": "exclusive_write", 00:09:54.190 "zoned": false, 00:09:54.190 "supported_io_types": { 00:09:54.190 "read": true, 00:09:54.190 "write": true, 00:09:54.190 "unmap": true, 00:09:54.190 "flush": true, 00:09:54.190 "reset": true, 00:09:54.190 "nvme_admin": false, 00:09:54.190 "nvme_io": false, 00:09:54.190 "nvme_io_md": false, 00:09:54.190 "write_zeroes": true, 00:09:54.190 "zcopy": true, 00:09:54.190 "get_zone_info": false, 00:09:54.190 "zone_management": false, 00:09:54.190 "zone_append": false, 00:09:54.190 "compare": false, 00:09:54.190 "compare_and_write": false, 00:09:54.190 "abort": true, 00:09:54.190 "seek_hole": false, 00:09:54.190 "seek_data": false, 00:09:54.190 "copy": true, 00:09:54.190 "nvme_iov_md": false 00:09:54.190 }, 00:09:54.190 "memory_domains": [ 00:09:54.190 { 00:09:54.190 "dma_device_id": "system", 00:09:54.190 "dma_device_type": 1 00:09:54.190 }, 00:09:54.190 { 00:09:54.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.190 "dma_device_type": 2 00:09:54.190 } 00:09:54.190 ], 00:09:54.190 "driver_specific": {} 00:09:54.190 } 00:09:54.190 ] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.190 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.190 "name": "Existed_Raid", 00:09:54.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.190 "strip_size_kb": 0, 00:09:54.190 "state": "configuring", 00:09:54.190 "raid_level": "raid1", 00:09:54.190 "superblock": false, 00:09:54.191 "num_base_bdevs": 3, 00:09:54.191 "num_base_bdevs_discovered": 2, 00:09:54.191 "num_base_bdevs_operational": 3, 00:09:54.191 "base_bdevs_list": [ 00:09:54.191 { 00:09:54.191 "name": "BaseBdev1", 00:09:54.191 "uuid": 
"52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:54.191 "is_configured": true, 00:09:54.191 "data_offset": 0, 00:09:54.191 "data_size": 65536 00:09:54.191 }, 00:09:54.191 { 00:09:54.191 "name": null, 00:09:54.191 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:54.191 "is_configured": false, 00:09:54.191 "data_offset": 0, 00:09:54.191 "data_size": 65536 00:09:54.191 }, 00:09:54.191 { 00:09:54.191 "name": "BaseBdev3", 00:09:54.191 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:54.191 "is_configured": true, 00:09:54.191 "data_offset": 0, 00:09:54.191 "data_size": 65536 00:09:54.191 } 00:09:54.191 ] 00:09:54.191 }' 00:09:54.191 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.191 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.758 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.758 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.758 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.758 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.758 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.758 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.759 [2024-11-26 18:56:45.975155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.759 18:56:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.759 18:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.759 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.759 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.759 "name": "Existed_Raid", 00:09:54.759 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:54.759 "strip_size_kb": 0, 00:09:54.759 "state": "configuring", 00:09:54.759 "raid_level": "raid1", 00:09:54.759 "superblock": false, 00:09:54.759 "num_base_bdevs": 3, 00:09:54.759 "num_base_bdevs_discovered": 1, 00:09:54.759 "num_base_bdevs_operational": 3, 00:09:54.759 "base_bdevs_list": [ 00:09:54.759 { 00:09:54.759 "name": "BaseBdev1", 00:09:54.759 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:54.759 "is_configured": true, 00:09:54.759 "data_offset": 0, 00:09:54.759 "data_size": 65536 00:09:54.759 }, 00:09:54.759 { 00:09:54.759 "name": null, 00:09:54.759 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:54.759 "is_configured": false, 00:09:54.759 "data_offset": 0, 00:09:54.759 "data_size": 65536 00:09:54.759 }, 00:09:54.759 { 00:09:54.759 "name": null, 00:09:54.759 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:54.759 "is_configured": false, 00:09:54.759 "data_offset": 0, 00:09:54.759 "data_size": 65536 00:09:54.759 } 00:09:54.759 ] 00:09:54.759 }' 00:09:54.759 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.759 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.326 [2024-11-26 18:56:46.535416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.326 "name": "Existed_Raid", 00:09:55.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.326 "strip_size_kb": 0, 00:09:55.326 "state": "configuring", 00:09:55.326 "raid_level": "raid1", 00:09:55.326 "superblock": false, 00:09:55.326 "num_base_bdevs": 3, 00:09:55.326 "num_base_bdevs_discovered": 2, 00:09:55.326 "num_base_bdevs_operational": 3, 00:09:55.326 "base_bdevs_list": [ 00:09:55.326 { 00:09:55.326 "name": "BaseBdev1", 00:09:55.326 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:55.326 "is_configured": true, 00:09:55.326 "data_offset": 0, 00:09:55.326 "data_size": 65536 00:09:55.326 }, 00:09:55.326 { 00:09:55.326 "name": null, 00:09:55.326 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:55.326 "is_configured": false, 00:09:55.326 "data_offset": 0, 00:09:55.326 "data_size": 65536 00:09:55.326 }, 00:09:55.326 { 00:09:55.326 "name": "BaseBdev3", 00:09:55.326 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:55.326 "is_configured": true, 00:09:55.326 "data_offset": 0, 00:09:55.326 "data_size": 65536 00:09:55.326 } 00:09:55.326 ] 00:09:55.326 }' 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.326 18:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.894 [2024-11-26 18:56:47.111546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.894 18:56:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.894 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.894 "name": "Existed_Raid", 00:09:55.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.894 "strip_size_kb": 0, 00:09:55.894 "state": "configuring", 00:09:55.894 "raid_level": "raid1", 00:09:55.894 "superblock": false, 00:09:55.894 "num_base_bdevs": 3, 00:09:55.894 "num_base_bdevs_discovered": 1, 00:09:55.894 "num_base_bdevs_operational": 3, 00:09:55.894 "base_bdevs_list": [ 00:09:55.894 { 00:09:55.894 "name": null, 00:09:55.894 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:55.894 "is_configured": false, 00:09:55.894 "data_offset": 0, 00:09:55.894 "data_size": 65536 00:09:55.894 }, 00:09:55.894 { 00:09:55.894 "name": null, 00:09:55.894 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:55.894 "is_configured": false, 00:09:55.894 "data_offset": 0, 00:09:55.894 "data_size": 65536 00:09:55.894 }, 00:09:55.894 { 00:09:55.894 "name": "BaseBdev3", 00:09:55.894 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:55.895 "is_configured": true, 00:09:55.895 "data_offset": 0, 00:09:55.895 "data_size": 65536 00:09:55.895 } 00:09:55.895 ] 00:09:55.895 }' 00:09:55.895 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.895 18:56:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.462 [2024-11-26 18:56:47.750163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.462 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.462 "name": "Existed_Raid", 00:09:56.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.462 "strip_size_kb": 0, 00:09:56.463 "state": "configuring", 00:09:56.463 "raid_level": "raid1", 00:09:56.463 "superblock": false, 00:09:56.463 "num_base_bdevs": 3, 00:09:56.463 "num_base_bdevs_discovered": 2, 00:09:56.463 "num_base_bdevs_operational": 3, 00:09:56.463 "base_bdevs_list": [ 00:09:56.463 { 00:09:56.463 "name": null, 00:09:56.463 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:56.463 "is_configured": false, 00:09:56.463 "data_offset": 0, 00:09:56.463 "data_size": 65536 00:09:56.463 }, 00:09:56.463 { 00:09:56.463 "name": "BaseBdev2", 00:09:56.463 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:56.463 "is_configured": true, 00:09:56.463 "data_offset": 0, 00:09:56.463 "data_size": 65536 00:09:56.463 }, 00:09:56.463 { 
00:09:56.463 "name": "BaseBdev3", 00:09:56.463 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:56.463 "is_configured": true, 00:09:56.463 "data_offset": 0, 00:09:56.463 "data_size": 65536 00:09:56.463 } 00:09:56.463 ] 00:09:56.463 }' 00:09:56.463 18:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.463 18:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 52086c58-81f3-4ae6-be8d-c895de45aa62 00:09:57.031 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.031 18:56:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 [2024-11-26 18:56:48.424618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:57.289 [2024-11-26 18:56:48.424911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.289 [2024-11-26 18:56:48.424935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:57.289 [2024-11-26 18:56:48.425261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:57.289 [2024-11-26 18:56:48.425458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.289 [2024-11-26 18:56:48.425479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:57.289 [2024-11-26 18:56:48.425781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.289 NewBaseBdev 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 [ 00:09:57.289 { 00:09:57.289 "name": "NewBaseBdev", 00:09:57.289 "aliases": [ 00:09:57.289 "52086c58-81f3-4ae6-be8d-c895de45aa62" 00:09:57.289 ], 00:09:57.289 "product_name": "Malloc disk", 00:09:57.289 "block_size": 512, 00:09:57.289 "num_blocks": 65536, 00:09:57.289 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:57.289 "assigned_rate_limits": { 00:09:57.289 "rw_ios_per_sec": 0, 00:09:57.289 "rw_mbytes_per_sec": 0, 00:09:57.289 "r_mbytes_per_sec": 0, 00:09:57.289 "w_mbytes_per_sec": 0 00:09:57.289 }, 00:09:57.289 "claimed": true, 00:09:57.289 "claim_type": "exclusive_write", 00:09:57.289 "zoned": false, 00:09:57.289 "supported_io_types": { 00:09:57.289 "read": true, 00:09:57.289 "write": true, 00:09:57.289 "unmap": true, 00:09:57.289 "flush": true, 00:09:57.289 "reset": true, 00:09:57.289 "nvme_admin": false, 00:09:57.289 "nvme_io": false, 00:09:57.289 "nvme_io_md": false, 00:09:57.289 "write_zeroes": true, 00:09:57.289 "zcopy": true, 00:09:57.289 "get_zone_info": false, 00:09:57.289 "zone_management": false, 00:09:57.289 "zone_append": false, 00:09:57.289 "compare": false, 00:09:57.289 "compare_and_write": false, 00:09:57.289 "abort": true, 00:09:57.289 "seek_hole": false, 00:09:57.289 "seek_data": false, 00:09:57.289 "copy": true, 00:09:57.289 "nvme_iov_md": false 00:09:57.289 }, 00:09:57.289 "memory_domains": [ 00:09:57.289 { 00:09:57.289 
"dma_device_id": "system", 00:09:57.289 "dma_device_type": 1 00:09:57.289 }, 00:09:57.289 { 00:09:57.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.289 "dma_device_type": 2 00:09:57.289 } 00:09:57.289 ], 00:09:57.289 "driver_specific": {} 00:09:57.289 } 00:09:57.289 ] 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 18:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.289 "name": "Existed_Raid", 00:09:57.289 "uuid": "913763aa-22a5-4df9-b7be-b6919fda4074", 00:09:57.289 "strip_size_kb": 0, 00:09:57.289 "state": "online", 00:09:57.289 "raid_level": "raid1", 00:09:57.289 "superblock": false, 00:09:57.289 "num_base_bdevs": 3, 00:09:57.289 "num_base_bdevs_discovered": 3, 00:09:57.289 "num_base_bdevs_operational": 3, 00:09:57.289 "base_bdevs_list": [ 00:09:57.289 { 00:09:57.289 "name": "NewBaseBdev", 00:09:57.289 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:57.289 "is_configured": true, 00:09:57.289 "data_offset": 0, 00:09:57.289 "data_size": 65536 00:09:57.289 }, 00:09:57.289 { 00:09:57.289 "name": "BaseBdev2", 00:09:57.289 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:57.289 "is_configured": true, 00:09:57.289 "data_offset": 0, 00:09:57.289 "data_size": 65536 00:09:57.289 }, 00:09:57.289 { 00:09:57.289 "name": "BaseBdev3", 00:09:57.290 "uuid": "73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:57.290 "is_configured": true, 00:09:57.290 "data_offset": 0, 00:09:57.290 "data_size": 65536 00:09:57.290 } 00:09:57.290 ] 00:09:57.290 }' 00:09:57.290 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.290 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.909 
18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.909 [2024-11-26 18:56:48.977229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.909 18:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.909 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.909 "name": "Existed_Raid", 00:09:57.909 "aliases": [ 00:09:57.909 "913763aa-22a5-4df9-b7be-b6919fda4074" 00:09:57.909 ], 00:09:57.909 "product_name": "Raid Volume", 00:09:57.909 "block_size": 512, 00:09:57.909 "num_blocks": 65536, 00:09:57.909 "uuid": "913763aa-22a5-4df9-b7be-b6919fda4074", 00:09:57.909 "assigned_rate_limits": { 00:09:57.909 "rw_ios_per_sec": 0, 00:09:57.909 "rw_mbytes_per_sec": 0, 00:09:57.909 "r_mbytes_per_sec": 0, 00:09:57.909 "w_mbytes_per_sec": 0 00:09:57.909 }, 00:09:57.909 "claimed": false, 00:09:57.909 "zoned": false, 00:09:57.909 "supported_io_types": { 00:09:57.909 "read": true, 00:09:57.909 "write": true, 00:09:57.909 "unmap": false, 00:09:57.909 "flush": false, 00:09:57.909 "reset": true, 00:09:57.910 "nvme_admin": false, 00:09:57.910 "nvme_io": false, 00:09:57.910 "nvme_io_md": false, 00:09:57.910 "write_zeroes": true, 00:09:57.910 "zcopy": false, 00:09:57.910 
"get_zone_info": false, 00:09:57.910 "zone_management": false, 00:09:57.910 "zone_append": false, 00:09:57.910 "compare": false, 00:09:57.910 "compare_and_write": false, 00:09:57.910 "abort": false, 00:09:57.910 "seek_hole": false, 00:09:57.910 "seek_data": false, 00:09:57.910 "copy": false, 00:09:57.910 "nvme_iov_md": false 00:09:57.910 }, 00:09:57.910 "memory_domains": [ 00:09:57.910 { 00:09:57.910 "dma_device_id": "system", 00:09:57.910 "dma_device_type": 1 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.910 "dma_device_type": 2 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "dma_device_id": "system", 00:09:57.910 "dma_device_type": 1 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.910 "dma_device_type": 2 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "dma_device_id": "system", 00:09:57.910 "dma_device_type": 1 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.910 "dma_device_type": 2 00:09:57.910 } 00:09:57.910 ], 00:09:57.910 "driver_specific": { 00:09:57.910 "raid": { 00:09:57.910 "uuid": "913763aa-22a5-4df9-b7be-b6919fda4074", 00:09:57.910 "strip_size_kb": 0, 00:09:57.910 "state": "online", 00:09:57.910 "raid_level": "raid1", 00:09:57.910 "superblock": false, 00:09:57.910 "num_base_bdevs": 3, 00:09:57.910 "num_base_bdevs_discovered": 3, 00:09:57.910 "num_base_bdevs_operational": 3, 00:09:57.910 "base_bdevs_list": [ 00:09:57.910 { 00:09:57.910 "name": "NewBaseBdev", 00:09:57.910 "uuid": "52086c58-81f3-4ae6-be8d-c895de45aa62", 00:09:57.910 "is_configured": true, 00:09:57.910 "data_offset": 0, 00:09:57.910 "data_size": 65536 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "name": "BaseBdev2", 00:09:57.910 "uuid": "3b0b54e7-4658-458c-bfcb-d33ea42f589c", 00:09:57.910 "is_configured": true, 00:09:57.910 "data_offset": 0, 00:09:57.910 "data_size": 65536 00:09:57.910 }, 00:09:57.910 { 00:09:57.910 "name": "BaseBdev3", 00:09:57.910 "uuid": 
"73e5d78c-a55f-45e2-9222-a7b77b8bf77b", 00:09:57.910 "is_configured": true, 00:09:57.910 "data_offset": 0, 00:09:57.910 "data_size": 65536 00:09:57.910 } 00:09:57.910 ] 00:09:57.910 } 00:09:57.910 } 00:09:57.910 }' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:57.910 BaseBdev2 00:09:57.910 BaseBdev3' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.168 
[2024-11-26 18:56:49.304886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.168 [2024-11-26 18:56:49.304938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.168 [2024-11-26 18:56:49.305037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.168 [2024-11-26 18:56:49.305403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.168 [2024-11-26 18:56:49.305421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67478 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67478 ']' 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67478 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67478 00:09:58.168 killing process with pid 67478 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67478' 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67478 00:09:58.168 [2024-11-26 
18:56:49.349215] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.168 18:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67478 00:09:58.427 [2024-11-26 18:56:49.623618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.805 ************************************ 00:09:59.805 END TEST raid_state_function_test 00:09:59.805 ************************************ 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:59.805 00:09:59.805 real 0m12.053s 00:09:59.805 user 0m19.933s 00:09:59.805 sys 0m1.637s 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.805 18:56:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:59.805 18:56:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.805 18:56:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.805 18:56:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.805 ************************************ 00:09:59.805 START TEST raid_state_function_test_sb 00:09:59.805 ************************************ 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.805 18:56:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:59.805 
18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:59.805 Process raid pid: 68116 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68116 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68116' 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68116 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68116 ']' 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.805 18:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.805 [2024-11-26 18:56:50.916377] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:09:59.805 [2024-11-26 18:56:50.916548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.805 [2024-11-26 18:56:51.109050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.063 [2024-11-26 18:56:51.271341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.322 [2024-11-26 18:56:51.488745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.323 [2024-11-26 18:56:51.488820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.890 [2024-11-26 18:56:51.969830] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.890 [2024-11-26 18:56:51.969908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.890 [2024-11-26 18:56:51.969928] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.890 [2024-11-26 18:56:51.969944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.890 [2024-11-26 18:56:51.969954] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:00.890 [2024-11-26 18:56:51.969968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.890 18:56:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.890 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.890 "name": "Existed_Raid", 00:10:00.890 "uuid": "8fecd2ba-dac8-4d27-8701-9fc9cf5e7072", 00:10:00.890 "strip_size_kb": 0, 00:10:00.890 "state": "configuring", 00:10:00.890 "raid_level": "raid1", 00:10:00.890 "superblock": true, 00:10:00.890 "num_base_bdevs": 3, 00:10:00.890 "num_base_bdevs_discovered": 0, 00:10:00.890 "num_base_bdevs_operational": 3, 00:10:00.890 "base_bdevs_list": [ 00:10:00.890 { 00:10:00.890 "name": "BaseBdev1", 00:10:00.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.890 "is_configured": false, 00:10:00.890 "data_offset": 0, 00:10:00.890 "data_size": 0 00:10:00.890 }, 00:10:00.890 { 00:10:00.890 "name": "BaseBdev2", 00:10:00.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.890 "is_configured": false, 00:10:00.890 "data_offset": 0, 00:10:00.890 "data_size": 0 00:10:00.890 }, 00:10:00.890 { 00:10:00.890 "name": "BaseBdev3", 00:10:00.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.890 "is_configured": false, 00:10:00.890 "data_offset": 0, 00:10:00.890 "data_size": 0 00:10:00.891 } 00:10:00.891 ] 00:10:00.891 }' 00:10:00.891 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.891 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.457 [2024-11-26 18:56:52.529900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.457 [2024-11-26 18:56:52.529957] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.457 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.457 [2024-11-26 18:56:52.541870] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.458 [2024-11-26 18:56:52.541958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.458 [2024-11-26 18:56:52.541973] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.458 [2024-11-26 18:56:52.541988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.458 [2024-11-26 18:56:52.541997] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.458 [2024-11-26 18:56:52.542011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.458 [2024-11-26 18:56:52.586395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.458 BaseBdev1 
00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.458 [ 00:10:01.458 { 00:10:01.458 "name": "BaseBdev1", 00:10:01.458 "aliases": [ 00:10:01.458 "218cc21e-a55b-4b83-957c-d9bc96fd26be" 00:10:01.458 ], 00:10:01.458 "product_name": "Malloc disk", 00:10:01.458 "block_size": 512, 00:10:01.458 "num_blocks": 65536, 00:10:01.458 "uuid": "218cc21e-a55b-4b83-957c-d9bc96fd26be", 00:10:01.458 "assigned_rate_limits": { 00:10:01.458 
"rw_ios_per_sec": 0, 00:10:01.458 "rw_mbytes_per_sec": 0, 00:10:01.458 "r_mbytes_per_sec": 0, 00:10:01.458 "w_mbytes_per_sec": 0 00:10:01.458 }, 00:10:01.458 "claimed": true, 00:10:01.458 "claim_type": "exclusive_write", 00:10:01.458 "zoned": false, 00:10:01.458 "supported_io_types": { 00:10:01.458 "read": true, 00:10:01.458 "write": true, 00:10:01.458 "unmap": true, 00:10:01.458 "flush": true, 00:10:01.458 "reset": true, 00:10:01.458 "nvme_admin": false, 00:10:01.458 "nvme_io": false, 00:10:01.458 "nvme_io_md": false, 00:10:01.458 "write_zeroes": true, 00:10:01.458 "zcopy": true, 00:10:01.458 "get_zone_info": false, 00:10:01.458 "zone_management": false, 00:10:01.458 "zone_append": false, 00:10:01.458 "compare": false, 00:10:01.458 "compare_and_write": false, 00:10:01.458 "abort": true, 00:10:01.458 "seek_hole": false, 00:10:01.458 "seek_data": false, 00:10:01.458 "copy": true, 00:10:01.458 "nvme_iov_md": false 00:10:01.458 }, 00:10:01.458 "memory_domains": [ 00:10:01.458 { 00:10:01.458 "dma_device_id": "system", 00:10:01.458 "dma_device_type": 1 00:10:01.458 }, 00:10:01.458 { 00:10:01.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.458 "dma_device_type": 2 00:10:01.458 } 00:10:01.458 ], 00:10:01.458 "driver_specific": {} 00:10:01.458 } 00:10:01.458 ] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.458 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.458 "name": "Existed_Raid", 00:10:01.458 "uuid": "02938df5-d1a8-41fb-ae67-580bfd720569", 00:10:01.458 "strip_size_kb": 0, 00:10:01.458 "state": "configuring", 00:10:01.458 "raid_level": "raid1", 00:10:01.458 "superblock": true, 00:10:01.459 "num_base_bdevs": 3, 00:10:01.459 "num_base_bdevs_discovered": 1, 00:10:01.459 "num_base_bdevs_operational": 3, 00:10:01.459 "base_bdevs_list": [ 00:10:01.459 { 00:10:01.459 "name": "BaseBdev1", 00:10:01.459 "uuid": "218cc21e-a55b-4b83-957c-d9bc96fd26be", 00:10:01.459 "is_configured": true, 00:10:01.459 "data_offset": 2048, 00:10:01.459 "data_size": 63488 
00:10:01.459 }, 00:10:01.459 { 00:10:01.459 "name": "BaseBdev2", 00:10:01.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.459 "is_configured": false, 00:10:01.459 "data_offset": 0, 00:10:01.459 "data_size": 0 00:10:01.459 }, 00:10:01.459 { 00:10:01.459 "name": "BaseBdev3", 00:10:01.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.459 "is_configured": false, 00:10:01.459 "data_offset": 0, 00:10:01.459 "data_size": 0 00:10:01.459 } 00:10:01.459 ] 00:10:01.459 }' 00:10:01.459 18:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.459 18:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 [2024-11-26 18:56:53.138666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.085 [2024-11-26 18:56:53.138746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 [2024-11-26 18:56:53.146720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.085 [2024-11-26 18:56:53.149320] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.085 [2024-11-26 18:56:53.149380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.085 [2024-11-26 18:56:53.149396] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.085 [2024-11-26 18:56:53.149411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.085 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.085 "name": "Existed_Raid", 00:10:02.085 "uuid": "53078b78-5f81-4e82-b468-aa186223cdca", 00:10:02.085 "strip_size_kb": 0, 00:10:02.085 "state": "configuring", 00:10:02.085 "raid_level": "raid1", 00:10:02.085 "superblock": true, 00:10:02.085 "num_base_bdevs": 3, 00:10:02.085 "num_base_bdevs_discovered": 1, 00:10:02.085 "num_base_bdevs_operational": 3, 00:10:02.085 "base_bdevs_list": [ 00:10:02.085 { 00:10:02.085 "name": "BaseBdev1", 00:10:02.085 "uuid": "218cc21e-a55b-4b83-957c-d9bc96fd26be", 00:10:02.085 "is_configured": true, 00:10:02.085 "data_offset": 2048, 00:10:02.085 "data_size": 63488 00:10:02.085 }, 00:10:02.085 { 00:10:02.085 "name": "BaseBdev2", 00:10:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.085 "is_configured": false, 00:10:02.085 "data_offset": 0, 00:10:02.085 "data_size": 0 00:10:02.085 }, 00:10:02.085 { 00:10:02.085 "name": "BaseBdev3", 00:10:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.086 "is_configured": false, 00:10:02.086 "data_offset": 0, 00:10:02.086 "data_size": 0 00:10:02.086 } 00:10:02.086 ] 00:10:02.086 }' 00:10:02.086 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.086 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:02.344 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.344 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.344 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.344 [2024-11-26 18:56:53.705233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.602 BaseBdev2 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.602 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.603 [ 00:10:02.603 { 00:10:02.603 "name": "BaseBdev2", 00:10:02.603 "aliases": [ 00:10:02.603 "f0a06c44-bdf2-4c51-a1aa-91ca3c459918" 00:10:02.603 ], 00:10:02.603 "product_name": "Malloc disk", 00:10:02.603 "block_size": 512, 00:10:02.603 "num_blocks": 65536, 00:10:02.603 "uuid": "f0a06c44-bdf2-4c51-a1aa-91ca3c459918", 00:10:02.603 "assigned_rate_limits": { 00:10:02.603 "rw_ios_per_sec": 0, 00:10:02.603 "rw_mbytes_per_sec": 0, 00:10:02.603 "r_mbytes_per_sec": 0, 00:10:02.603 "w_mbytes_per_sec": 0 00:10:02.603 }, 00:10:02.603 "claimed": true, 00:10:02.603 "claim_type": "exclusive_write", 00:10:02.603 "zoned": false, 00:10:02.603 "supported_io_types": { 00:10:02.603 "read": true, 00:10:02.603 "write": true, 00:10:02.603 "unmap": true, 00:10:02.603 "flush": true, 00:10:02.603 "reset": true, 00:10:02.603 "nvme_admin": false, 00:10:02.603 "nvme_io": false, 00:10:02.603 "nvme_io_md": false, 00:10:02.603 "write_zeroes": true, 00:10:02.603 "zcopy": true, 00:10:02.603 "get_zone_info": false, 00:10:02.603 "zone_management": false, 00:10:02.603 "zone_append": false, 00:10:02.603 "compare": false, 00:10:02.603 "compare_and_write": false, 00:10:02.603 "abort": true, 00:10:02.603 "seek_hole": false, 00:10:02.603 "seek_data": false, 00:10:02.603 "copy": true, 00:10:02.603 "nvme_iov_md": false 00:10:02.603 }, 00:10:02.603 "memory_domains": [ 00:10:02.603 { 00:10:02.603 "dma_device_id": "system", 00:10:02.603 "dma_device_type": 1 00:10:02.603 }, 00:10:02.603 { 00:10:02.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.603 "dma_device_type": 2 00:10:02.603 } 00:10:02.603 ], 00:10:02.603 "driver_specific": {} 00:10:02.603 } 00:10:02.603 ] 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.603 
18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.603 "name": "Existed_Raid", 00:10:02.603 "uuid": "53078b78-5f81-4e82-b468-aa186223cdca", 00:10:02.603 "strip_size_kb": 0, 00:10:02.603 "state": "configuring", 00:10:02.603 "raid_level": "raid1", 00:10:02.603 "superblock": true, 00:10:02.603 "num_base_bdevs": 3, 00:10:02.603 "num_base_bdevs_discovered": 2, 00:10:02.603 "num_base_bdevs_operational": 3, 00:10:02.603 "base_bdevs_list": [ 00:10:02.603 { 00:10:02.603 "name": "BaseBdev1", 00:10:02.603 "uuid": "218cc21e-a55b-4b83-957c-d9bc96fd26be", 00:10:02.603 "is_configured": true, 00:10:02.603 "data_offset": 2048, 00:10:02.603 "data_size": 63488 00:10:02.603 }, 00:10:02.603 { 00:10:02.603 "name": "BaseBdev2", 00:10:02.603 "uuid": "f0a06c44-bdf2-4c51-a1aa-91ca3c459918", 00:10:02.603 "is_configured": true, 00:10:02.603 "data_offset": 2048, 00:10:02.603 "data_size": 63488 00:10:02.603 }, 00:10:02.603 { 00:10:02.603 "name": "BaseBdev3", 00:10:02.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.603 "is_configured": false, 00:10:02.603 "data_offset": 0, 00:10:02.603 "data_size": 0 00:10:02.603 } 00:10:02.603 ] 00:10:02.603 }' 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.603 18:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.170 [2024-11-26 18:56:54.303285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.170 [2024-11-26 18:56:54.303681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:03.170 [2024-11-26 18:56:54.303731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:03.170 [2024-11-26 18:56:54.304185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:03.170 [2024-11-26 18:56:54.304457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:03.170 [2024-11-26 18:56:54.304486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:03.170 BaseBdev3 00:10:03.170 [2024-11-26 18:56:54.304710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.170 18:56:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.170 [ 00:10:03.170 { 00:10:03.170 "name": "BaseBdev3", 00:10:03.170 "aliases": [ 00:10:03.170 "7fada666-887c-4b06-b3b6-e4231302a543" 00:10:03.170 ], 00:10:03.170 "product_name": "Malloc disk", 00:10:03.170 "block_size": 512, 00:10:03.170 "num_blocks": 65536, 00:10:03.170 "uuid": "7fada666-887c-4b06-b3b6-e4231302a543", 00:10:03.170 "assigned_rate_limits": { 00:10:03.170 "rw_ios_per_sec": 0, 00:10:03.170 "rw_mbytes_per_sec": 0, 00:10:03.170 "r_mbytes_per_sec": 0, 00:10:03.170 "w_mbytes_per_sec": 0 00:10:03.170 }, 00:10:03.170 "claimed": true, 00:10:03.170 "claim_type": "exclusive_write", 00:10:03.170 "zoned": false, 00:10:03.170 "supported_io_types": { 00:10:03.170 "read": true, 00:10:03.170 "write": true, 00:10:03.170 "unmap": true, 00:10:03.170 "flush": true, 00:10:03.170 "reset": true, 00:10:03.170 "nvme_admin": false, 00:10:03.170 "nvme_io": false, 00:10:03.170 "nvme_io_md": false, 00:10:03.170 "write_zeroes": true, 00:10:03.170 "zcopy": true, 00:10:03.170 "get_zone_info": false, 00:10:03.170 "zone_management": false, 00:10:03.170 "zone_append": false, 00:10:03.170 "compare": false, 00:10:03.170 "compare_and_write": false, 00:10:03.170 "abort": true, 00:10:03.170 "seek_hole": false, 00:10:03.170 "seek_data": false, 00:10:03.170 "copy": true, 00:10:03.170 "nvme_iov_md": false 00:10:03.170 }, 00:10:03.170 "memory_domains": [ 00:10:03.170 { 00:10:03.170 "dma_device_id": "system", 00:10:03.170 "dma_device_type": 1 00:10:03.170 }, 00:10:03.170 { 00:10:03.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.170 "dma_device_type": 2 00:10:03.170 } 00:10:03.170 ], 00:10:03.170 "driver_specific": {} 00:10:03.170 } 00:10:03.170 ] 
00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.170 18:56:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.170 "name": "Existed_Raid", 00:10:03.170 "uuid": "53078b78-5f81-4e82-b468-aa186223cdca", 00:10:03.170 "strip_size_kb": 0, 00:10:03.170 "state": "online", 00:10:03.170 "raid_level": "raid1", 00:10:03.170 "superblock": true, 00:10:03.170 "num_base_bdevs": 3, 00:10:03.170 "num_base_bdevs_discovered": 3, 00:10:03.170 "num_base_bdevs_operational": 3, 00:10:03.170 "base_bdevs_list": [ 00:10:03.170 { 00:10:03.170 "name": "BaseBdev1", 00:10:03.170 "uuid": "218cc21e-a55b-4b83-957c-d9bc96fd26be", 00:10:03.170 "is_configured": true, 00:10:03.170 "data_offset": 2048, 00:10:03.170 "data_size": 63488 00:10:03.170 }, 00:10:03.170 { 00:10:03.170 "name": "BaseBdev2", 00:10:03.170 "uuid": "f0a06c44-bdf2-4c51-a1aa-91ca3c459918", 00:10:03.170 "is_configured": true, 00:10:03.170 "data_offset": 2048, 00:10:03.170 "data_size": 63488 00:10:03.170 }, 00:10:03.170 { 00:10:03.170 "name": "BaseBdev3", 00:10:03.170 "uuid": "7fada666-887c-4b06-b3b6-e4231302a543", 00:10:03.170 "is_configured": true, 00:10:03.170 "data_offset": 2048, 00:10:03.170 "data_size": 63488 00:10:03.170 } 00:10:03.170 ] 00:10:03.170 }' 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.170 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.737 [2024-11-26 18:56:54.835939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.737 "name": "Existed_Raid", 00:10:03.737 "aliases": [ 00:10:03.737 "53078b78-5f81-4e82-b468-aa186223cdca" 00:10:03.737 ], 00:10:03.737 "product_name": "Raid Volume", 00:10:03.737 "block_size": 512, 00:10:03.737 "num_blocks": 63488, 00:10:03.737 "uuid": "53078b78-5f81-4e82-b468-aa186223cdca", 00:10:03.737 "assigned_rate_limits": { 00:10:03.737 "rw_ios_per_sec": 0, 00:10:03.737 "rw_mbytes_per_sec": 0, 00:10:03.737 "r_mbytes_per_sec": 0, 00:10:03.737 "w_mbytes_per_sec": 0 00:10:03.737 }, 00:10:03.737 "claimed": false, 00:10:03.737 "zoned": false, 00:10:03.737 "supported_io_types": { 00:10:03.737 "read": true, 00:10:03.737 "write": true, 00:10:03.737 "unmap": false, 00:10:03.737 "flush": false, 00:10:03.737 "reset": true, 00:10:03.737 "nvme_admin": false, 00:10:03.737 "nvme_io": false, 00:10:03.737 "nvme_io_md": false, 00:10:03.737 
"write_zeroes": true, 00:10:03.737 "zcopy": false, 00:10:03.737 "get_zone_info": false, 00:10:03.737 "zone_management": false, 00:10:03.737 "zone_append": false, 00:10:03.737 "compare": false, 00:10:03.737 "compare_and_write": false, 00:10:03.737 "abort": false, 00:10:03.737 "seek_hole": false, 00:10:03.737 "seek_data": false, 00:10:03.737 "copy": false, 00:10:03.737 "nvme_iov_md": false 00:10:03.737 }, 00:10:03.737 "memory_domains": [ 00:10:03.737 { 00:10:03.737 "dma_device_id": "system", 00:10:03.737 "dma_device_type": 1 00:10:03.737 }, 00:10:03.737 { 00:10:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.737 "dma_device_type": 2 00:10:03.737 }, 00:10:03.737 { 00:10:03.737 "dma_device_id": "system", 00:10:03.737 "dma_device_type": 1 00:10:03.737 }, 00:10:03.737 { 00:10:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.737 "dma_device_type": 2 00:10:03.737 }, 00:10:03.737 { 00:10:03.737 "dma_device_id": "system", 00:10:03.737 "dma_device_type": 1 00:10:03.737 }, 00:10:03.737 { 00:10:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.737 "dma_device_type": 2 00:10:03.737 } 00:10:03.737 ], 00:10:03.737 "driver_specific": { 00:10:03.737 "raid": { 00:10:03.737 "uuid": "53078b78-5f81-4e82-b468-aa186223cdca", 00:10:03.737 "strip_size_kb": 0, 00:10:03.737 "state": "online", 00:10:03.737 "raid_level": "raid1", 00:10:03.737 "superblock": true, 00:10:03.737 "num_base_bdevs": 3, 00:10:03.737 "num_base_bdevs_discovered": 3, 00:10:03.737 "num_base_bdevs_operational": 3, 00:10:03.737 "base_bdevs_list": [ 00:10:03.737 { 00:10:03.737 "name": "BaseBdev1", 00:10:03.737 "uuid": "218cc21e-a55b-4b83-957c-d9bc96fd26be", 00:10:03.737 "is_configured": true, 00:10:03.737 "data_offset": 2048, 00:10:03.737 "data_size": 63488 00:10:03.737 }, 00:10:03.737 { 00:10:03.737 "name": "BaseBdev2", 00:10:03.737 "uuid": "f0a06c44-bdf2-4c51-a1aa-91ca3c459918", 00:10:03.737 "is_configured": true, 00:10:03.737 "data_offset": 2048, 00:10:03.737 "data_size": 63488 00:10:03.737 }, 
00:10:03.737 { 00:10:03.737 "name": "BaseBdev3", 00:10:03.737 "uuid": "7fada666-887c-4b06-b3b6-e4231302a543", 00:10:03.737 "is_configured": true, 00:10:03.737 "data_offset": 2048, 00:10:03.737 "data_size": 63488 00:10:03.737 } 00:10:03.737 ] 00:10:03.737 } 00:10:03.737 } 00:10:03.737 }' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.737 BaseBdev2 00:10:03.737 BaseBdev3' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.737 18:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.737 
18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.737 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.738 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.738 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.738 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.738 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.996 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.997 [2024-11-26 18:56:55.127701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.997 
18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.997 "name": "Existed_Raid", 00:10:03.997 "uuid": "53078b78-5f81-4e82-b468-aa186223cdca", 00:10:03.997 "strip_size_kb": 0, 00:10:03.997 "state": "online", 00:10:03.997 "raid_level": "raid1", 00:10:03.997 "superblock": true, 00:10:03.997 "num_base_bdevs": 3, 00:10:03.997 "num_base_bdevs_discovered": 2, 00:10:03.997 "num_base_bdevs_operational": 2, 00:10:03.997 "base_bdevs_list": [ 00:10:03.997 { 00:10:03.997 "name": null, 00:10:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.997 "is_configured": false, 00:10:03.997 "data_offset": 0, 00:10:03.997 "data_size": 63488 00:10:03.997 }, 00:10:03.997 { 00:10:03.997 "name": "BaseBdev2", 00:10:03.997 "uuid": "f0a06c44-bdf2-4c51-a1aa-91ca3c459918", 00:10:03.997 "is_configured": true, 00:10:03.997 "data_offset": 2048, 00:10:03.997 "data_size": 63488 00:10:03.997 }, 00:10:03.997 { 00:10:03.997 "name": "BaseBdev3", 00:10:03.997 "uuid": "7fada666-887c-4b06-b3b6-e4231302a543", 00:10:03.997 "is_configured": true, 00:10:03.997 "data_offset": 2048, 00:10:03.997 "data_size": 63488 00:10:03.997 } 00:10:03.997 ] 00:10:03.997 }' 00:10:03.997 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.997 
18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.563 [2024-11-26 18:56:55.794814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.563 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.821 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.821 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.821 18:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:04.821 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.821 18:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.821 [2024-11-26 18:56:55.946229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.821 [2024-11-26 18:56:55.946379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.821 [2024-11-26 18:56:56.034582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.821 [2024-11-26 18:56:56.034652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.821 [2024-11-26 18:56:56.034674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.821 BaseBdev2 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.821 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.821 [ 00:10:04.821 { 00:10:04.821 "name": "BaseBdev2", 00:10:04.821 "aliases": [ 00:10:04.821 "c2eb4884-44f5-4d22-8586-d00880d9f757" 00:10:04.821 ], 00:10:04.821 "product_name": "Malloc disk", 00:10:04.821 "block_size": 512, 00:10:04.821 "num_blocks": 65536, 00:10:04.821 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:04.821 "assigned_rate_limits": { 00:10:04.821 "rw_ios_per_sec": 0, 00:10:04.821 "rw_mbytes_per_sec": 0, 00:10:04.821 "r_mbytes_per_sec": 0, 00:10:04.822 "w_mbytes_per_sec": 0 00:10:04.822 }, 00:10:04.822 "claimed": false, 00:10:04.822 "zoned": false, 00:10:04.822 "supported_io_types": { 00:10:04.822 "read": true, 00:10:04.822 "write": true, 00:10:04.822 "unmap": true, 00:10:04.822 "flush": true, 00:10:04.822 "reset": true, 00:10:04.822 "nvme_admin": false, 00:10:04.822 "nvme_io": false, 00:10:04.822 
"nvme_io_md": false, 00:10:04.822 "write_zeroes": true, 00:10:04.822 "zcopy": true, 00:10:04.822 "get_zone_info": false, 00:10:04.822 "zone_management": false, 00:10:04.822 "zone_append": false, 00:10:04.822 "compare": false, 00:10:04.822 "compare_and_write": false, 00:10:04.822 "abort": true, 00:10:04.822 "seek_hole": false, 00:10:04.822 "seek_data": false, 00:10:04.822 "copy": true, 00:10:04.822 "nvme_iov_md": false 00:10:04.822 }, 00:10:04.822 "memory_domains": [ 00:10:04.822 { 00:10:04.822 "dma_device_id": "system", 00:10:04.822 "dma_device_type": 1 00:10:04.822 }, 00:10:04.822 { 00:10:04.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.822 "dma_device_type": 2 00:10:04.822 } 00:10:04.822 ], 00:10:04.822 "driver_specific": {} 00:10:04.822 } 00:10:04.822 ] 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.822 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 BaseBdev3 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 [ 00:10:05.081 { 00:10:05.081 "name": "BaseBdev3", 00:10:05.081 "aliases": [ 00:10:05.081 "ab580804-47dd-4409-861f-1944022bf6d3" 00:10:05.081 ], 00:10:05.081 "product_name": "Malloc disk", 00:10:05.081 "block_size": 512, 00:10:05.081 "num_blocks": 65536, 00:10:05.081 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:05.081 "assigned_rate_limits": { 00:10:05.081 "rw_ios_per_sec": 0, 00:10:05.081 "rw_mbytes_per_sec": 0, 00:10:05.081 "r_mbytes_per_sec": 0, 00:10:05.081 "w_mbytes_per_sec": 0 00:10:05.081 }, 00:10:05.081 "claimed": false, 00:10:05.081 "zoned": false, 00:10:05.081 "supported_io_types": { 00:10:05.081 "read": true, 00:10:05.081 "write": true, 00:10:05.081 "unmap": true, 00:10:05.081 "flush": true, 00:10:05.081 "reset": true, 00:10:05.081 "nvme_admin": false, 
00:10:05.081 "nvme_io": false, 00:10:05.081 "nvme_io_md": false, 00:10:05.081 "write_zeroes": true, 00:10:05.081 "zcopy": true, 00:10:05.081 "get_zone_info": false, 00:10:05.081 "zone_management": false, 00:10:05.081 "zone_append": false, 00:10:05.081 "compare": false, 00:10:05.081 "compare_and_write": false, 00:10:05.081 "abort": true, 00:10:05.081 "seek_hole": false, 00:10:05.081 "seek_data": false, 00:10:05.081 "copy": true, 00:10:05.081 "nvme_iov_md": false 00:10:05.081 }, 00:10:05.081 "memory_domains": [ 00:10:05.081 { 00:10:05.081 "dma_device_id": "system", 00:10:05.081 "dma_device_type": 1 00:10:05.081 }, 00:10:05.081 { 00:10:05.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.081 "dma_device_type": 2 00:10:05.081 } 00:10:05.081 ], 00:10:05.081 "driver_specific": {} 00:10:05.081 } 00:10:05.081 ] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 [2024-11-26 18:56:56.253345] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.081 [2024-11-26 18:56:56.253405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.081 [2024-11-26 18:56:56.253437] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.081 [2024-11-26 18:56:56.256197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.081 
18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.081 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.081 "name": "Existed_Raid", 00:10:05.081 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:05.081 "strip_size_kb": 0, 00:10:05.081 "state": "configuring", 00:10:05.081 "raid_level": "raid1", 00:10:05.081 "superblock": true, 00:10:05.081 "num_base_bdevs": 3, 00:10:05.081 "num_base_bdevs_discovered": 2, 00:10:05.081 "num_base_bdevs_operational": 3, 00:10:05.081 "base_bdevs_list": [ 00:10:05.081 { 00:10:05.081 "name": "BaseBdev1", 00:10:05.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.081 "is_configured": false, 00:10:05.081 "data_offset": 0, 00:10:05.081 "data_size": 0 00:10:05.081 }, 00:10:05.081 { 00:10:05.081 "name": "BaseBdev2", 00:10:05.081 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:05.081 "is_configured": true, 00:10:05.081 "data_offset": 2048, 00:10:05.081 "data_size": 63488 00:10:05.082 }, 00:10:05.082 { 00:10:05.082 "name": "BaseBdev3", 00:10:05.082 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:05.082 "is_configured": true, 00:10:05.082 "data_offset": 2048, 00:10:05.082 "data_size": 63488 00:10:05.082 } 00:10:05.082 ] 00:10:05.082 }' 00:10:05.082 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.082 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.649 [2024-11-26 18:56:56.749492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.649 18:56:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.649 "name": 
"Existed_Raid", 00:10:05.649 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:05.649 "strip_size_kb": 0, 00:10:05.649 "state": "configuring", 00:10:05.649 "raid_level": "raid1", 00:10:05.649 "superblock": true, 00:10:05.649 "num_base_bdevs": 3, 00:10:05.649 "num_base_bdevs_discovered": 1, 00:10:05.649 "num_base_bdevs_operational": 3, 00:10:05.649 "base_bdevs_list": [ 00:10:05.649 { 00:10:05.649 "name": "BaseBdev1", 00:10:05.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.649 "is_configured": false, 00:10:05.649 "data_offset": 0, 00:10:05.649 "data_size": 0 00:10:05.649 }, 00:10:05.649 { 00:10:05.649 "name": null, 00:10:05.649 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:05.649 "is_configured": false, 00:10:05.649 "data_offset": 0, 00:10:05.649 "data_size": 63488 00:10:05.649 }, 00:10:05.649 { 00:10:05.649 "name": "BaseBdev3", 00:10:05.649 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:05.649 "is_configured": true, 00:10:05.649 "data_offset": 2048, 00:10:05.649 "data_size": 63488 00:10:05.649 } 00:10:05.649 ] 00:10:05.649 }' 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.649 18:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.908 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.908 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.908 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.908 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:06.168 
18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 [2024-11-26 18:56:57.376608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.168 BaseBdev1 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.168 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.169 [ 00:10:06.169 { 00:10:06.169 "name": "BaseBdev1", 00:10:06.169 "aliases": [ 00:10:06.169 "af20fabd-c5fd-4487-8518-6fdcde6e4a21" 00:10:06.169 ], 00:10:06.169 "product_name": "Malloc disk", 00:10:06.169 "block_size": 512, 00:10:06.169 "num_blocks": 65536, 00:10:06.169 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:06.169 "assigned_rate_limits": { 00:10:06.169 "rw_ios_per_sec": 0, 00:10:06.169 "rw_mbytes_per_sec": 0, 00:10:06.169 "r_mbytes_per_sec": 0, 00:10:06.169 "w_mbytes_per_sec": 0 00:10:06.169 }, 00:10:06.169 "claimed": true, 00:10:06.169 "claim_type": "exclusive_write", 00:10:06.169 "zoned": false, 00:10:06.169 "supported_io_types": { 00:10:06.169 "read": true, 00:10:06.169 "write": true, 00:10:06.169 "unmap": true, 00:10:06.169 "flush": true, 00:10:06.169 "reset": true, 00:10:06.169 "nvme_admin": false, 00:10:06.169 "nvme_io": false, 00:10:06.169 "nvme_io_md": false, 00:10:06.169 "write_zeroes": true, 00:10:06.169 "zcopy": true, 00:10:06.169 "get_zone_info": false, 00:10:06.169 "zone_management": false, 00:10:06.169 "zone_append": false, 00:10:06.169 "compare": false, 00:10:06.169 "compare_and_write": false, 00:10:06.169 "abort": true, 00:10:06.169 "seek_hole": false, 00:10:06.169 "seek_data": false, 00:10:06.169 "copy": true, 00:10:06.169 "nvme_iov_md": false 00:10:06.169 }, 00:10:06.169 "memory_domains": [ 00:10:06.169 { 00:10:06.169 "dma_device_id": "system", 00:10:06.169 "dma_device_type": 1 00:10:06.169 }, 00:10:06.169 { 00:10:06.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.169 "dma_device_type": 2 00:10:06.169 } 00:10:06.169 ], 00:10:06.169 "driver_specific": {} 00:10:06.169 } 00:10:06.169 ] 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.169 
18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.169 "name": "Existed_Raid", 00:10:06.169 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:06.169 "strip_size_kb": 0, 
00:10:06.169 "state": "configuring", 00:10:06.169 "raid_level": "raid1", 00:10:06.169 "superblock": true, 00:10:06.169 "num_base_bdevs": 3, 00:10:06.169 "num_base_bdevs_discovered": 2, 00:10:06.169 "num_base_bdevs_operational": 3, 00:10:06.169 "base_bdevs_list": [ 00:10:06.169 { 00:10:06.169 "name": "BaseBdev1", 00:10:06.169 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:06.169 "is_configured": true, 00:10:06.169 "data_offset": 2048, 00:10:06.169 "data_size": 63488 00:10:06.169 }, 00:10:06.169 { 00:10:06.169 "name": null, 00:10:06.169 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:06.169 "is_configured": false, 00:10:06.169 "data_offset": 0, 00:10:06.169 "data_size": 63488 00:10:06.169 }, 00:10:06.169 { 00:10:06.169 "name": "BaseBdev3", 00:10:06.169 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:06.169 "is_configured": true, 00:10:06.169 "data_offset": 2048, 00:10:06.169 "data_size": 63488 00:10:06.169 } 00:10:06.169 ] 00:10:06.169 }' 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.169 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 [2024-11-26 18:56:57.948818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 18:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.736 "name": "Existed_Raid", 00:10:06.736 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:06.736 "strip_size_kb": 0, 00:10:06.736 "state": "configuring", 00:10:06.736 "raid_level": "raid1", 00:10:06.736 "superblock": true, 00:10:06.736 "num_base_bdevs": 3, 00:10:06.736 "num_base_bdevs_discovered": 1, 00:10:06.736 "num_base_bdevs_operational": 3, 00:10:06.736 "base_bdevs_list": [ 00:10:06.736 { 00:10:06.736 "name": "BaseBdev1", 00:10:06.736 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:06.736 "is_configured": true, 00:10:06.736 "data_offset": 2048, 00:10:06.736 "data_size": 63488 00:10:06.736 }, 00:10:06.736 { 00:10:06.736 "name": null, 00:10:06.736 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:06.736 "is_configured": false, 00:10:06.736 "data_offset": 0, 00:10:06.736 "data_size": 63488 00:10:06.736 }, 00:10:06.736 { 00:10:06.736 "name": null, 00:10:06.736 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:06.736 "is_configured": false, 00:10:06.736 "data_offset": 0, 00:10:06.736 "data_size": 63488 00:10:06.736 } 00:10:06.736 ] 00:10:06.736 }' 00:10:06.736 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.736 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.304 [2024-11-26 18:56:58.509029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.304 18:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.304 "name": "Existed_Raid", 00:10:07.304 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:07.304 "strip_size_kb": 0, 00:10:07.304 "state": "configuring", 00:10:07.304 "raid_level": "raid1", 00:10:07.304 "superblock": true, 00:10:07.304 "num_base_bdevs": 3, 00:10:07.304 "num_base_bdevs_discovered": 2, 00:10:07.304 "num_base_bdevs_operational": 3, 00:10:07.304 "base_bdevs_list": [ 00:10:07.305 { 00:10:07.305 "name": "BaseBdev1", 00:10:07.305 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:07.305 "is_configured": true, 00:10:07.305 "data_offset": 2048, 00:10:07.305 "data_size": 63488 00:10:07.305 }, 00:10:07.305 { 00:10:07.305 "name": null, 00:10:07.305 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:07.305 "is_configured": false, 00:10:07.305 "data_offset": 0, 00:10:07.305 "data_size": 63488 00:10:07.305 }, 00:10:07.305 { 00:10:07.305 "name": "BaseBdev3", 00:10:07.305 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:07.305 "is_configured": true, 00:10:07.305 "data_offset": 2048, 00:10:07.305 "data_size": 63488 00:10:07.305 } 00:10:07.305 ] 00:10:07.305 }' 00:10:07.305 18:56:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.305 18:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.872 [2024-11-26 18:56:59.105205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.872 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.873 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.131 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.131 "name": "Existed_Raid", 00:10:08.131 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:08.131 "strip_size_kb": 0, 00:10:08.131 "state": "configuring", 00:10:08.131 "raid_level": "raid1", 00:10:08.131 "superblock": true, 00:10:08.131 "num_base_bdevs": 3, 00:10:08.131 "num_base_bdevs_discovered": 1, 00:10:08.131 "num_base_bdevs_operational": 3, 00:10:08.131 "base_bdevs_list": [ 00:10:08.131 { 00:10:08.131 "name": null, 00:10:08.131 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:08.131 "is_configured": false, 00:10:08.131 "data_offset": 0, 00:10:08.131 "data_size": 63488 00:10:08.131 }, 00:10:08.131 { 00:10:08.131 "name": null, 00:10:08.131 "uuid": 
"c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:08.131 "is_configured": false, 00:10:08.131 "data_offset": 0, 00:10:08.131 "data_size": 63488 00:10:08.131 }, 00:10:08.131 { 00:10:08.131 "name": "BaseBdev3", 00:10:08.131 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:08.131 "is_configured": true, 00:10:08.131 "data_offset": 2048, 00:10:08.131 "data_size": 63488 00:10:08.131 } 00:10:08.131 ] 00:10:08.131 }' 00:10:08.131 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.131 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.389 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.647 [2024-11-26 18:56:59.755481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.647 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.648 "name": "Existed_Raid", 00:10:08.648 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:08.648 "strip_size_kb": 0, 00:10:08.648 "state": "configuring", 00:10:08.648 
"raid_level": "raid1", 00:10:08.648 "superblock": true, 00:10:08.648 "num_base_bdevs": 3, 00:10:08.648 "num_base_bdevs_discovered": 2, 00:10:08.648 "num_base_bdevs_operational": 3, 00:10:08.648 "base_bdevs_list": [ 00:10:08.648 { 00:10:08.648 "name": null, 00:10:08.648 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:08.648 "is_configured": false, 00:10:08.648 "data_offset": 0, 00:10:08.648 "data_size": 63488 00:10:08.648 }, 00:10:08.648 { 00:10:08.648 "name": "BaseBdev2", 00:10:08.648 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:08.648 "is_configured": true, 00:10:08.648 "data_offset": 2048, 00:10:08.648 "data_size": 63488 00:10:08.648 }, 00:10:08.648 { 00:10:08.648 "name": "BaseBdev3", 00:10:08.648 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:08.648 "is_configured": true, 00:10:08.648 "data_offset": 2048, 00:10:08.648 "data_size": 63488 00:10:08.648 } 00:10:08.648 ] 00:10:08.648 }' 00:10:08.648 18:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.648 18:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.906 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.906 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.906 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.906 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.165 18:57:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u af20fabd-c5fd-4487-8518-6fdcde6e4a21 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.165 [2024-11-26 18:57:00.405676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.165 [2024-11-26 18:57:00.405981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.165 [2024-11-26 18:57:00.406001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.165 NewBaseBdev 00:10:09.165 [2024-11-26 18:57:00.406325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:09.165 [2024-11-26 18:57:00.406512] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.165 [2024-11-26 18:57:00.406534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:09.165 [2024-11-26 18:57:00.406707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.165 
18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.165 [ 00:10:09.165 { 00:10:09.165 "name": "NewBaseBdev", 00:10:09.165 "aliases": [ 00:10:09.165 "af20fabd-c5fd-4487-8518-6fdcde6e4a21" 00:10:09.165 ], 00:10:09.165 "product_name": "Malloc disk", 00:10:09.165 "block_size": 512, 00:10:09.165 "num_blocks": 65536, 00:10:09.165 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:09.165 "assigned_rate_limits": { 00:10:09.165 "rw_ios_per_sec": 0, 00:10:09.165 "rw_mbytes_per_sec": 0, 00:10:09.165 "r_mbytes_per_sec": 0, 00:10:09.165 "w_mbytes_per_sec": 0 00:10:09.165 }, 00:10:09.165 "claimed": true, 00:10:09.165 "claim_type": "exclusive_write", 00:10:09.165 
"zoned": false, 00:10:09.165 "supported_io_types": { 00:10:09.165 "read": true, 00:10:09.165 "write": true, 00:10:09.165 "unmap": true, 00:10:09.165 "flush": true, 00:10:09.165 "reset": true, 00:10:09.165 "nvme_admin": false, 00:10:09.165 "nvme_io": false, 00:10:09.165 "nvme_io_md": false, 00:10:09.165 "write_zeroes": true, 00:10:09.165 "zcopy": true, 00:10:09.165 "get_zone_info": false, 00:10:09.165 "zone_management": false, 00:10:09.165 "zone_append": false, 00:10:09.165 "compare": false, 00:10:09.165 "compare_and_write": false, 00:10:09.165 "abort": true, 00:10:09.165 "seek_hole": false, 00:10:09.165 "seek_data": false, 00:10:09.165 "copy": true, 00:10:09.165 "nvme_iov_md": false 00:10:09.165 }, 00:10:09.165 "memory_domains": [ 00:10:09.165 { 00:10:09.165 "dma_device_id": "system", 00:10:09.165 "dma_device_type": 1 00:10:09.165 }, 00:10:09.165 { 00:10:09.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.165 "dma_device_type": 2 00:10:09.165 } 00:10:09.165 ], 00:10:09.165 "driver_specific": {} 00:10:09.165 } 00:10:09.165 ] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.165 "name": "Existed_Raid", 00:10:09.165 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:09.165 "strip_size_kb": 0, 00:10:09.165 "state": "online", 00:10:09.165 "raid_level": "raid1", 00:10:09.165 "superblock": true, 00:10:09.165 "num_base_bdevs": 3, 00:10:09.165 "num_base_bdevs_discovered": 3, 00:10:09.165 "num_base_bdevs_operational": 3, 00:10:09.165 "base_bdevs_list": [ 00:10:09.165 { 00:10:09.165 "name": "NewBaseBdev", 00:10:09.165 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:09.165 "is_configured": true, 00:10:09.165 "data_offset": 2048, 00:10:09.165 "data_size": 63488 00:10:09.165 }, 00:10:09.165 { 00:10:09.165 "name": "BaseBdev2", 00:10:09.165 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:09.165 "is_configured": true, 00:10:09.165 "data_offset": 2048, 00:10:09.165 "data_size": 63488 00:10:09.165 }, 00:10:09.165 
{ 00:10:09.165 "name": "BaseBdev3", 00:10:09.165 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:09.165 "is_configured": true, 00:10:09.165 "data_offset": 2048, 00:10:09.165 "data_size": 63488 00:10:09.165 } 00:10:09.165 ] 00:10:09.165 }' 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.165 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.729 [2024-11-26 18:57:00.938286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.729 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.729 "name": "Existed_Raid", 00:10:09.729 
"aliases": [ 00:10:09.729 "aa70fca1-b428-4b1e-9be6-e8f998510f69" 00:10:09.729 ], 00:10:09.729 "product_name": "Raid Volume", 00:10:09.729 "block_size": 512, 00:10:09.729 "num_blocks": 63488, 00:10:09.729 "uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:09.729 "assigned_rate_limits": { 00:10:09.729 "rw_ios_per_sec": 0, 00:10:09.729 "rw_mbytes_per_sec": 0, 00:10:09.729 "r_mbytes_per_sec": 0, 00:10:09.729 "w_mbytes_per_sec": 0 00:10:09.729 }, 00:10:09.729 "claimed": false, 00:10:09.729 "zoned": false, 00:10:09.729 "supported_io_types": { 00:10:09.729 "read": true, 00:10:09.729 "write": true, 00:10:09.729 "unmap": false, 00:10:09.729 "flush": false, 00:10:09.729 "reset": true, 00:10:09.729 "nvme_admin": false, 00:10:09.729 "nvme_io": false, 00:10:09.729 "nvme_io_md": false, 00:10:09.729 "write_zeroes": true, 00:10:09.729 "zcopy": false, 00:10:09.729 "get_zone_info": false, 00:10:09.729 "zone_management": false, 00:10:09.729 "zone_append": false, 00:10:09.729 "compare": false, 00:10:09.729 "compare_and_write": false, 00:10:09.729 "abort": false, 00:10:09.729 "seek_hole": false, 00:10:09.729 "seek_data": false, 00:10:09.729 "copy": false, 00:10:09.729 "nvme_iov_md": false 00:10:09.729 }, 00:10:09.729 "memory_domains": [ 00:10:09.729 { 00:10:09.729 "dma_device_id": "system", 00:10:09.729 "dma_device_type": 1 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.729 "dma_device_type": 2 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "dma_device_id": "system", 00:10:09.729 "dma_device_type": 1 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.729 "dma_device_type": 2 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "dma_device_id": "system", 00:10:09.729 "dma_device_type": 1 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.729 "dma_device_type": 2 00:10:09.729 } 00:10:09.729 ], 00:10:09.729 "driver_specific": { 00:10:09.729 "raid": { 00:10:09.729 
"uuid": "aa70fca1-b428-4b1e-9be6-e8f998510f69", 00:10:09.729 "strip_size_kb": 0, 00:10:09.729 "state": "online", 00:10:09.729 "raid_level": "raid1", 00:10:09.729 "superblock": true, 00:10:09.729 "num_base_bdevs": 3, 00:10:09.729 "num_base_bdevs_discovered": 3, 00:10:09.729 "num_base_bdevs_operational": 3, 00:10:09.729 "base_bdevs_list": [ 00:10:09.729 { 00:10:09.729 "name": "NewBaseBdev", 00:10:09.729 "uuid": "af20fabd-c5fd-4487-8518-6fdcde6e4a21", 00:10:09.729 "is_configured": true, 00:10:09.729 "data_offset": 2048, 00:10:09.729 "data_size": 63488 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "name": "BaseBdev2", 00:10:09.729 "uuid": "c2eb4884-44f5-4d22-8586-d00880d9f757", 00:10:09.729 "is_configured": true, 00:10:09.729 "data_offset": 2048, 00:10:09.729 "data_size": 63488 00:10:09.729 }, 00:10:09.729 { 00:10:09.729 "name": "BaseBdev3", 00:10:09.730 "uuid": "ab580804-47dd-4409-861f-1944022bf6d3", 00:10:09.730 "is_configured": true, 00:10:09.730 "data_offset": 2048, 00:10:09.730 "data_size": 63488 00:10:09.730 } 00:10:09.730 ] 00:10:09.730 } 00:10:09.730 } 00:10:09.730 }' 00:10:09.730 18:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:09.730 BaseBdev2 00:10:09.730 BaseBdev3' 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:09.730 18:57:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.730 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.987 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.988 18:57:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.988 [2024-11-26 18:57:01.237957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.988 [2024-11-26 18:57:01.238126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.988 [2024-11-26 18:57:01.238355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.988 [2024-11-26 18:57:01.238877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.988 [2024-11-26 18:57:01.239076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68116 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68116 ']' 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68116 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68116 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68116' 00:10:09.988 killing process with pid 68116 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68116 00:10:09.988 [2024-11-26 18:57:01.279923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.988 18:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68116 00:10:10.245 [2024-11-26 18:57:01.562264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.619 ************************************ 00:10:11.619 END TEST raid_state_function_test_sb 00:10:11.619 ************************************ 00:10:11.619 18:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.619 00:10:11.619 real 0m11.837s 00:10:11.619 user 0m19.559s 00:10:11.619 sys 0m1.688s 00:10:11.619 18:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.619 18:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.619 18:57:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:11.619 18:57:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:11.619 18:57:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.619 18:57:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.619 ************************************ 00:10:11.619 START TEST raid_superblock_test 00:10:11.619 ************************************ 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68756 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68756 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68756 ']' 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.619 18:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.619 [2024-11-26 18:57:02.802556] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:10:11.619 [2024-11-26 18:57:02.802759] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68756 ] 00:10:11.877 [2024-11-26 18:57:02.994006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.877 [2024-11-26 18:57:03.155408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.135 [2024-11-26 18:57:03.391390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.135 [2024-11-26 18:57:03.391487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:12.701 
18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.701 malloc1 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.701 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.701 [2024-11-26 18:57:03.926826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.701 [2024-11-26 18:57:03.927077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.701 [2024-11-26 18:57:03.927241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:12.702 [2024-11-26 18:57:03.927394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.702 pt1 00:10:12.702 [2024-11-26 18:57:03.930787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.702 [2024-11-26 18:57:03.930839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.702 malloc2 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.702 [2024-11-26 18:57:03.979476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.702 [2024-11-26 18:57:03.979719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.702 [2024-11-26 18:57:03.979917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:12.702 [2024-11-26 18:57:03.980036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.702 pt2 00:10:12.702 [2024-11-26 18:57:03.983070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.702 [2024-11-26 18:57:03.983124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.702 18:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.702 malloc3 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.702 [2024-11-26 18:57:04.052005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.702 [2024-11-26 18:57:04.052085] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.702 [2024-11-26 18:57:04.052125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:12.702 [2024-11-26 18:57:04.052144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.702 pt3 00:10:12.702 [2024-11-26 18:57:04.055711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.702 [2024-11-26 18:57:04.055770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.702 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.702 [2024-11-26 18:57:04.060183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.702 [2024-11-26 18:57:04.063402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.702 [2024-11-26 18:57:04.063706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.702 [2024-11-26 18:57:04.064237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:12.702 [2024-11-26 18:57:04.064425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.702 [2024-11-26 18:57:04.064875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:12.702 
[2024-11-26 18:57:04.065334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:12.702 [2024-11-26 18:57:04.065504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:12.961 [2024-11-26 18:57:04.065914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.961 "name": "raid_bdev1", 00:10:12.961 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:12.961 "strip_size_kb": 0, 00:10:12.961 "state": "online", 00:10:12.961 "raid_level": "raid1", 00:10:12.961 "superblock": true, 00:10:12.961 "num_base_bdevs": 3, 00:10:12.961 "num_base_bdevs_discovered": 3, 00:10:12.961 "num_base_bdevs_operational": 3, 00:10:12.961 "base_bdevs_list": [ 00:10:12.961 { 00:10:12.961 "name": "pt1", 00:10:12.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.961 "is_configured": true, 00:10:12.961 "data_offset": 2048, 00:10:12.961 "data_size": 63488 00:10:12.961 }, 00:10:12.961 { 00:10:12.961 "name": "pt2", 00:10:12.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.961 "is_configured": true, 00:10:12.961 "data_offset": 2048, 00:10:12.961 "data_size": 63488 00:10:12.961 }, 00:10:12.961 { 00:10:12.961 "name": "pt3", 00:10:12.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.961 "is_configured": true, 00:10:12.961 "data_offset": 2048, 00:10:12.961 "data_size": 63488 00:10:12.961 } 00:10:12.961 ] 00:10:12.961 }' 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.961 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.529 18:57:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.529 [2024-11-26 18:57:04.608875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.529 "name": "raid_bdev1", 00:10:13.529 "aliases": [ 00:10:13.529 "1a753495-e45b-43d1-afa4-07b1ccf5399b" 00:10:13.529 ], 00:10:13.529 "product_name": "Raid Volume", 00:10:13.529 "block_size": 512, 00:10:13.529 "num_blocks": 63488, 00:10:13.529 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:13.529 "assigned_rate_limits": { 00:10:13.529 "rw_ios_per_sec": 0, 00:10:13.529 "rw_mbytes_per_sec": 0, 00:10:13.529 "r_mbytes_per_sec": 0, 00:10:13.529 "w_mbytes_per_sec": 0 00:10:13.529 }, 00:10:13.529 "claimed": false, 00:10:13.529 "zoned": false, 00:10:13.529 "supported_io_types": { 00:10:13.529 "read": true, 00:10:13.529 "write": true, 00:10:13.529 "unmap": false, 00:10:13.529 "flush": false, 00:10:13.529 "reset": true, 00:10:13.529 "nvme_admin": false, 00:10:13.529 "nvme_io": false, 00:10:13.529 "nvme_io_md": false, 00:10:13.529 "write_zeroes": true, 00:10:13.529 "zcopy": false, 00:10:13.529 "get_zone_info": false, 00:10:13.529 "zone_management": false, 00:10:13.529 "zone_append": false, 00:10:13.529 "compare": false, 00:10:13.529 
"compare_and_write": false, 00:10:13.529 "abort": false, 00:10:13.529 "seek_hole": false, 00:10:13.529 "seek_data": false, 00:10:13.529 "copy": false, 00:10:13.529 "nvme_iov_md": false 00:10:13.529 }, 00:10:13.529 "memory_domains": [ 00:10:13.529 { 00:10:13.529 "dma_device_id": "system", 00:10:13.529 "dma_device_type": 1 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.529 "dma_device_type": 2 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "dma_device_id": "system", 00:10:13.529 "dma_device_type": 1 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.529 "dma_device_type": 2 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "dma_device_id": "system", 00:10:13.529 "dma_device_type": 1 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.529 "dma_device_type": 2 00:10:13.529 } 00:10:13.529 ], 00:10:13.529 "driver_specific": { 00:10:13.529 "raid": { 00:10:13.529 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:13.529 "strip_size_kb": 0, 00:10:13.529 "state": "online", 00:10:13.529 "raid_level": "raid1", 00:10:13.529 "superblock": true, 00:10:13.529 "num_base_bdevs": 3, 00:10:13.529 "num_base_bdevs_discovered": 3, 00:10:13.529 "num_base_bdevs_operational": 3, 00:10:13.529 "base_bdevs_list": [ 00:10:13.529 { 00:10:13.529 "name": "pt1", 00:10:13.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.529 "is_configured": true, 00:10:13.529 "data_offset": 2048, 00:10:13.529 "data_size": 63488 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "name": "pt2", 00:10:13.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.529 "is_configured": true, 00:10:13.529 "data_offset": 2048, 00:10:13.529 "data_size": 63488 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "name": "pt3", 00:10:13.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.529 "is_configured": true, 00:10:13.529 "data_offset": 2048, 00:10:13.529 "data_size": 63488 00:10:13.529 } 
00:10:13.529 ] 00:10:13.529 } 00:10:13.529 } 00:10:13.529 }' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.529 pt2 00:10:13.529 pt3' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.529 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 [2024-11-26 18:57:04.940760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1a753495-e45b-43d1-afa4-07b1ccf5399b 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1a753495-e45b-43d1-afa4-07b1ccf5399b ']' 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 [2024-11-26 18:57:04.984446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.788 [2024-11-26 18:57:04.984645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.788 [2024-11-26 18:57:04.984864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.788 [2024-11-26 18:57:04.985156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.788 [2024-11-26 18:57:04.985184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.788 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 18:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:13.789 18:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:13.789 18:57:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 [2024-11-26 18:57:05.136565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:13.789 [2024-11-26 18:57:05.139494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:13.789 [2024-11-26 18:57:05.139579] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:13.789 [2024-11-26 18:57:05.139670] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:13.789 [2024-11-26 18:57:05.139747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:13.789 [2024-11-26 18:57:05.139783] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:13.789 [2024-11-26 18:57:05.139832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.789 [2024-11-26 18:57:05.139846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:13.789 request: 00:10:13.789 { 00:10:13.789 "name": "raid_bdev1", 00:10:13.789 "raid_level": "raid1", 00:10:13.789 "base_bdevs": [ 00:10:13.789 "malloc1", 00:10:13.789 "malloc2", 00:10:13.789 "malloc3" 00:10:13.789 ], 00:10:13.789 "superblock": false, 00:10:13.789 "method": "bdev_raid_create", 00:10:13.789 "req_id": 1 00:10:13.789 } 00:10:13.789 Got JSON-RPC error response 00:10:13.789 response: 00:10:13.789 { 00:10:13.789 "code": -17, 00:10:13.789 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:13.789 } 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.048 [2024-11-26 18:57:05.196545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.048 [2024-11-26 18:57:05.196772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.048 [2024-11-26 18:57:05.196845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:14.048 [2024-11-26 18:57:05.196993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.048 [2024-11-26 18:57:05.200368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.048 [2024-11-26 18:57:05.200580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.048 [2024-11-26 18:57:05.200812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:14.048 [2024-11-26 18:57:05.201001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.048 pt1 00:10:14.048 
18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.048 "name": "raid_bdev1", 00:10:14.048 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:14.048 "strip_size_kb": 0, 00:10:14.048 
"state": "configuring", 00:10:14.048 "raid_level": "raid1", 00:10:14.048 "superblock": true, 00:10:14.048 "num_base_bdevs": 3, 00:10:14.048 "num_base_bdevs_discovered": 1, 00:10:14.048 "num_base_bdevs_operational": 3, 00:10:14.048 "base_bdevs_list": [ 00:10:14.048 { 00:10:14.048 "name": "pt1", 00:10:14.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.048 "is_configured": true, 00:10:14.048 "data_offset": 2048, 00:10:14.048 "data_size": 63488 00:10:14.048 }, 00:10:14.048 { 00:10:14.048 "name": null, 00:10:14.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.048 "is_configured": false, 00:10:14.048 "data_offset": 2048, 00:10:14.048 "data_size": 63488 00:10:14.048 }, 00:10:14.048 { 00:10:14.048 "name": null, 00:10:14.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.048 "is_configured": false, 00:10:14.048 "data_offset": 2048, 00:10:14.048 "data_size": 63488 00:10:14.048 } 00:10:14.048 ] 00:10:14.048 }' 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.048 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 [2024-11-26 18:57:05.733084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.619 [2024-11-26 18:57:05.733301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.619 [2024-11-26 18:57:05.733351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:14.619 
[2024-11-26 18:57:05.733368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.619 [2024-11-26 18:57:05.733970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.619 [2024-11-26 18:57:05.734006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.619 [2024-11-26 18:57:05.734126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.619 [2024-11-26 18:57:05.734160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.619 pt2 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 [2024-11-26 18:57:05.741068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.619 "name": "raid_bdev1", 00:10:14.619 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:14.619 "strip_size_kb": 0, 00:10:14.619 "state": "configuring", 00:10:14.619 "raid_level": "raid1", 00:10:14.619 "superblock": true, 00:10:14.619 "num_base_bdevs": 3, 00:10:14.619 "num_base_bdevs_discovered": 1, 00:10:14.619 "num_base_bdevs_operational": 3, 00:10:14.619 "base_bdevs_list": [ 00:10:14.619 { 00:10:14.619 "name": "pt1", 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.619 "is_configured": true, 00:10:14.619 "data_offset": 2048, 00:10:14.619 "data_size": 63488 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "name": null, 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.619 "is_configured": false, 00:10:14.619 "data_offset": 0, 00:10:14.619 "data_size": 63488 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "name": null, 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.619 "is_configured": false, 00:10:14.619 
"data_offset": 2048, 00:10:14.619 "data_size": 63488 00:10:14.619 } 00:10:14.619 ] 00:10:14.619 }' 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.619 18:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.186 [2024-11-26 18:57:06.269230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.186 [2024-11-26 18:57:06.269494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.186 [2024-11-26 18:57:06.269535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:15.186 [2024-11-26 18:57:06.269553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.186 [2024-11-26 18:57:06.270201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.186 [2024-11-26 18:57:06.270232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.186 [2024-11-26 18:57:06.270345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:15.186 [2024-11-26 18:57:06.270396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.186 pt2 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.186 18:57:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.186 [2024-11-26 18:57:06.277207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.186 [2024-11-26 18:57:06.277438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.186 [2024-11-26 18:57:06.277504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:15.186 [2024-11-26 18:57:06.277633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.186 [2024-11-26 18:57:06.278185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.186 [2024-11-26 18:57:06.278360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.186 [2024-11-26 18:57:06.278567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:15.186 [2024-11-26 18:57:06.278723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.186 [2024-11-26 18:57:06.278953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.186 [2024-11-26 18:57:06.279083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.186 [2024-11-26 18:57:06.279532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:15.186 [2024-11-26 18:57:06.279763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:15.186 [2024-11-26 18:57:06.279779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:15.186 [2024-11-26 18:57:06.279989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.186 pt3 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.186 "name": "raid_bdev1", 00:10:15.186 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:15.186 "strip_size_kb": 0, 00:10:15.186 "state": "online", 00:10:15.186 "raid_level": "raid1", 00:10:15.186 "superblock": true, 00:10:15.186 "num_base_bdevs": 3, 00:10:15.186 "num_base_bdevs_discovered": 3, 00:10:15.186 "num_base_bdevs_operational": 3, 00:10:15.186 "base_bdevs_list": [ 00:10:15.186 { 00:10:15.186 "name": "pt1", 00:10:15.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.186 "is_configured": true, 00:10:15.186 "data_offset": 2048, 00:10:15.186 "data_size": 63488 00:10:15.186 }, 00:10:15.186 { 00:10:15.186 "name": "pt2", 00:10:15.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.186 "is_configured": true, 00:10:15.186 "data_offset": 2048, 00:10:15.186 "data_size": 63488 00:10:15.186 }, 00:10:15.186 { 00:10:15.186 "name": "pt3", 00:10:15.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.186 "is_configured": true, 00:10:15.186 "data_offset": 2048, 00:10:15.186 "data_size": 63488 00:10:15.186 } 00:10:15.186 ] 00:10:15.186 }' 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.186 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.445 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.445 [2024-11-26 18:57:06.809760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.706 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.706 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.706 "name": "raid_bdev1", 00:10:15.706 "aliases": [ 00:10:15.706 "1a753495-e45b-43d1-afa4-07b1ccf5399b" 00:10:15.706 ], 00:10:15.706 "product_name": "Raid Volume", 00:10:15.706 "block_size": 512, 00:10:15.706 "num_blocks": 63488, 00:10:15.706 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:15.706 "assigned_rate_limits": { 00:10:15.706 "rw_ios_per_sec": 0, 00:10:15.706 "rw_mbytes_per_sec": 0, 00:10:15.706 "r_mbytes_per_sec": 0, 00:10:15.706 "w_mbytes_per_sec": 0 00:10:15.706 }, 00:10:15.706 "claimed": false, 00:10:15.706 "zoned": false, 00:10:15.706 "supported_io_types": { 00:10:15.706 "read": true, 00:10:15.706 "write": true, 00:10:15.706 "unmap": false, 00:10:15.706 "flush": false, 00:10:15.706 "reset": true, 00:10:15.706 "nvme_admin": false, 00:10:15.706 "nvme_io": false, 00:10:15.706 "nvme_io_md": false, 00:10:15.706 "write_zeroes": true, 00:10:15.706 "zcopy": false, 00:10:15.706 "get_zone_info": false, 
00:10:15.706 "zone_management": false, 00:10:15.706 "zone_append": false, 00:10:15.706 "compare": false, 00:10:15.706 "compare_and_write": false, 00:10:15.706 "abort": false, 00:10:15.706 "seek_hole": false, 00:10:15.706 "seek_data": false, 00:10:15.706 "copy": false, 00:10:15.706 "nvme_iov_md": false 00:10:15.706 }, 00:10:15.706 "memory_domains": [ 00:10:15.706 { 00:10:15.706 "dma_device_id": "system", 00:10:15.706 "dma_device_type": 1 00:10:15.706 }, 00:10:15.706 { 00:10:15.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.706 "dma_device_type": 2 00:10:15.706 }, 00:10:15.706 { 00:10:15.706 "dma_device_id": "system", 00:10:15.706 "dma_device_type": 1 00:10:15.706 }, 00:10:15.706 { 00:10:15.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.706 "dma_device_type": 2 00:10:15.706 }, 00:10:15.706 { 00:10:15.706 "dma_device_id": "system", 00:10:15.706 "dma_device_type": 1 00:10:15.706 }, 00:10:15.706 { 00:10:15.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.707 "dma_device_type": 2 00:10:15.707 } 00:10:15.707 ], 00:10:15.707 "driver_specific": { 00:10:15.707 "raid": { 00:10:15.707 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:15.707 "strip_size_kb": 0, 00:10:15.707 "state": "online", 00:10:15.707 "raid_level": "raid1", 00:10:15.707 "superblock": true, 00:10:15.707 "num_base_bdevs": 3, 00:10:15.707 "num_base_bdevs_discovered": 3, 00:10:15.707 "num_base_bdevs_operational": 3, 00:10:15.707 "base_bdevs_list": [ 00:10:15.707 { 00:10:15.707 "name": "pt1", 00:10:15.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.707 "is_configured": true, 00:10:15.707 "data_offset": 2048, 00:10:15.707 "data_size": 63488 00:10:15.707 }, 00:10:15.707 { 00:10:15.707 "name": "pt2", 00:10:15.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.707 "is_configured": true, 00:10:15.707 "data_offset": 2048, 00:10:15.707 "data_size": 63488 00:10:15.707 }, 00:10:15.707 { 00:10:15.707 "name": "pt3", 00:10:15.707 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:15.707 "is_configured": true, 00:10:15.707 "data_offset": 2048, 00:10:15.707 "data_size": 63488 00:10:15.707 } 00:10:15.707 ] 00:10:15.707 } 00:10:15.707 } 00:10:15.707 }' 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.707 pt2 00:10:15.707 pt3' 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.707 18:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.707 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:15.969 [2024-11-26 18:57:07.129831] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1a753495-e45b-43d1-afa4-07b1ccf5399b '!=' 1a753495-e45b-43d1-afa4-07b1ccf5399b ']' 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.969 [2024-11-26 18:57:07.181543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.969 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.970 18:57:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.970 "name": "raid_bdev1", 00:10:15.970 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:15.970 "strip_size_kb": 0, 00:10:15.970 "state": "online", 00:10:15.970 "raid_level": "raid1", 00:10:15.970 "superblock": true, 00:10:15.970 "num_base_bdevs": 3, 00:10:15.970 "num_base_bdevs_discovered": 2, 00:10:15.970 "num_base_bdevs_operational": 2, 00:10:15.970 "base_bdevs_list": [ 00:10:15.970 { 00:10:15.970 "name": null, 00:10:15.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.970 "is_configured": false, 00:10:15.970 "data_offset": 0, 00:10:15.970 "data_size": 63488 00:10:15.970 }, 00:10:15.970 { 00:10:15.970 "name": "pt2", 00:10:15.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.970 "is_configured": true, 00:10:15.970 "data_offset": 2048, 00:10:15.970 "data_size": 63488 00:10:15.970 }, 00:10:15.970 { 00:10:15.970 "name": "pt3", 00:10:15.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.970 "is_configured": true, 00:10:15.970 "data_offset": 2048, 00:10:15.970 "data_size": 63488 00:10:15.970 } 
00:10:15.970 ] 00:10:15.970 }' 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.970 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.539 [2024-11-26 18:57:07.669607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.539 [2024-11-26 18:57:07.669769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.539 [2024-11-26 18:57:07.669914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.539 [2024-11-26 18:57:07.670009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.539 [2024-11-26 18:57:07.670032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.539 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 18:57:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 [2024-11-26 18:57:07.749578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.540 [2024-11-26 18:57:07.749829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.540 [2024-11-26 18:57:07.749873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:16.540 [2024-11-26 18:57:07.749891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.540 [2024-11-26 18:57:07.752897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.540 [2024-11-26 18:57:07.752970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.540 [2024-11-26 18:57:07.753068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.540 [2024-11-26 18:57:07.753134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.540 pt2 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.540 18:57:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.540 "name": "raid_bdev1", 00:10:16.540 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:16.540 "strip_size_kb": 0, 00:10:16.540 "state": "configuring", 00:10:16.540 "raid_level": "raid1", 00:10:16.540 "superblock": true, 00:10:16.540 "num_base_bdevs": 3, 00:10:16.540 "num_base_bdevs_discovered": 1, 00:10:16.540 "num_base_bdevs_operational": 2, 00:10:16.540 "base_bdevs_list": [ 00:10:16.540 { 00:10:16.540 "name": null, 00:10:16.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.540 "is_configured": false, 00:10:16.540 "data_offset": 2048, 00:10:16.540 "data_size": 63488 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "name": "pt2", 00:10:16.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.540 "is_configured": true, 00:10:16.540 "data_offset": 2048, 00:10:16.540 "data_size": 63488 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "name": null, 00:10:16.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.540 "is_configured": false, 00:10:16.540 "data_offset": 2048, 00:10:16.540 "data_size": 63488 00:10:16.540 } 
00:10:16.540 ] 00:10:16.540 }' 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.540 18:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.108 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:17.108 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:17.108 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.109 [2024-11-26 18:57:08.273823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.109 [2024-11-26 18:57:08.274096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.109 [2024-11-26 18:57:08.274173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:17.109 [2024-11-26 18:57:08.274317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.109 [2024-11-26 18:57:08.274981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.109 [2024-11-26 18:57:08.275152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.109 [2024-11-26 18:57:08.275300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:17.109 [2024-11-26 18:57:08.275346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.109 [2024-11-26 18:57:08.275512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:17.109 [2024-11-26 18:57:08.275534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.109 [2024-11-26 18:57:08.275883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:17.109 [2024-11-26 18:57:08.276135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.109 [2024-11-26 18:57:08.276152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:17.109 [2024-11-26 18:57:08.276329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.109 pt3 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.109 
18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.109 "name": "raid_bdev1", 00:10:17.109 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:17.109 "strip_size_kb": 0, 00:10:17.109 "state": "online", 00:10:17.109 "raid_level": "raid1", 00:10:17.109 "superblock": true, 00:10:17.109 "num_base_bdevs": 3, 00:10:17.109 "num_base_bdevs_discovered": 2, 00:10:17.109 "num_base_bdevs_operational": 2, 00:10:17.109 "base_bdevs_list": [ 00:10:17.109 { 00:10:17.109 "name": null, 00:10:17.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.109 "is_configured": false, 00:10:17.109 "data_offset": 2048, 00:10:17.109 "data_size": 63488 00:10:17.109 }, 00:10:17.109 { 00:10:17.109 "name": "pt2", 00:10:17.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.109 "is_configured": true, 00:10:17.109 "data_offset": 2048, 00:10:17.109 "data_size": 63488 00:10:17.109 }, 00:10:17.109 { 00:10:17.109 "name": "pt3", 00:10:17.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.109 "is_configured": true, 00:10:17.109 "data_offset": 2048, 00:10:17.109 "data_size": 63488 00:10:17.109 } 00:10:17.109 ] 00:10:17.109 }' 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.109 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.676 [2024-11-26 18:57:08.805880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.676 [2024-11-26 18:57:08.805951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.676 [2024-11-26 18:57:08.806070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.676 [2024-11-26 18:57:08.806158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.676 [2024-11-26 18:57:08.806173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.676 [2024-11-26 18:57:08.877925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.676 [2024-11-26 18:57:08.878004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.676 [2024-11-26 18:57:08.878033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:17.676 [2024-11-26 18:57:08.878046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.676 [2024-11-26 18:57:08.881319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.676 [2024-11-26 18:57:08.881377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.676 [2024-11-26 18:57:08.881493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.676 [2024-11-26 18:57:08.881555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.676 [2024-11-26 18:57:08.881746] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:17.676 [2024-11-26 18:57:08.881802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.676 [2024-11-26 18:57:08.881826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:17.676 [2024-11-26 18:57:08.881927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.676 pt1 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.676 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 18:57:08 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.677 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.677 "name": "raid_bdev1", 00:10:17.677 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:17.677 "strip_size_kb": 0, 00:10:17.677 "state": "configuring", 00:10:17.677 "raid_level": "raid1", 00:10:17.677 "superblock": true, 00:10:17.677 "num_base_bdevs": 3, 00:10:17.677 "num_base_bdevs_discovered": 1, 00:10:17.677 "num_base_bdevs_operational": 2, 00:10:17.677 "base_bdevs_list": [ 00:10:17.677 { 00:10:17.677 "name": null, 00:10:17.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.677 "is_configured": false, 00:10:17.677 "data_offset": 2048, 00:10:17.677 "data_size": 63488 00:10:17.677 }, 00:10:17.677 { 00:10:17.677 "name": "pt2", 00:10:17.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.677 "is_configured": true, 00:10:17.677 "data_offset": 2048, 00:10:17.677 "data_size": 63488 00:10:17.677 }, 00:10:17.677 { 00:10:17.677 "name": null, 00:10:17.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.677 "is_configured": false, 00:10:17.677 "data_offset": 2048, 00:10:17.677 "data_size": 63488 00:10:17.677 } 00:10:17.677 ] 00:10:17.677 }' 00:10:17.677 18:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.677 18:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.243 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.244 [2024-11-26 18:57:09.474414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.244 [2024-11-26 18:57:09.474671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.244 [2024-11-26 18:57:09.474717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:18.244 [2024-11-26 18:57:09.474733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.244 [2024-11-26 18:57:09.475425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.244 [2024-11-26 18:57:09.475457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.244 [2024-11-26 18:57:09.475573] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:18.244 [2024-11-26 18:57:09.475606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.244 [2024-11-26 18:57:09.475768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:18.244 [2024-11-26 18:57:09.475791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.244 [2024-11-26 18:57:09.476133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:18.244 [2024-11-26 18:57:09.476333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:18.244 [2024-11-26 18:57:09.476364] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:18.244 [2024-11-26 18:57:09.476543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.244 pt3 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.244 "name": "raid_bdev1", 00:10:18.244 "uuid": "1a753495-e45b-43d1-afa4-07b1ccf5399b", 00:10:18.244 "strip_size_kb": 0, 00:10:18.244 "state": "online", 00:10:18.244 "raid_level": "raid1", 00:10:18.244 "superblock": true, 00:10:18.244 "num_base_bdevs": 3, 00:10:18.244 "num_base_bdevs_discovered": 2, 00:10:18.244 "num_base_bdevs_operational": 2, 00:10:18.244 "base_bdevs_list": [ 00:10:18.244 { 00:10:18.244 "name": null, 00:10:18.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.244 "is_configured": false, 00:10:18.244 "data_offset": 2048, 00:10:18.244 "data_size": 63488 00:10:18.244 }, 00:10:18.244 { 00:10:18.244 "name": "pt2", 00:10:18.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.244 "is_configured": true, 00:10:18.244 "data_offset": 2048, 00:10:18.244 "data_size": 63488 00:10:18.244 }, 00:10:18.244 { 00:10:18.244 "name": "pt3", 00:10:18.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.244 "is_configured": true, 00:10:18.244 "data_offset": 2048, 00:10:18.244 "data_size": 63488 00:10:18.244 } 00:10:18.244 ] 00:10:18.244 }' 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.244 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.813 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:18.813 18:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.813 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.813 18:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.813 [2024-11-26 18:57:10.066831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1a753495-e45b-43d1-afa4-07b1ccf5399b '!=' 1a753495-e45b-43d1-afa4-07b1ccf5399b ']' 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68756 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68756 ']' 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68756 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68756 00:10:18.813 killing process with pid 68756 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68756' 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68756 00:10:18.813 [2024-11-26 18:57:10.145867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.813 [2024-11-26 18:57:10.145993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.813 18:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68756 00:10:18.813 [2024-11-26 18:57:10.146077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.813 [2024-11-26 18:57:10.146097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:19.073 [2024-11-26 18:57:10.424152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.451 ************************************ 00:10:20.451 END TEST raid_superblock_test 00:10:20.451 ************************************ 00:10:20.451 18:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:20.451 00:10:20.451 real 0m8.779s 00:10:20.451 user 0m14.358s 00:10:20.451 sys 0m1.263s 00:10:20.451 18:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.451 18:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.451 18:57:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:20.451 18:57:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.451 18:57:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.451 18:57:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.451 ************************************ 00:10:20.451 START TEST raid_read_error_test 00:10:20.451 ************************************ 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:20.451 18:57:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.451 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.452 18:57:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qQow6ttOoP 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69207 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69207 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69207 ']' 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.452 18:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.452 [2024-11-26 18:57:11.668312] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:10:20.452 [2024-11-26 18:57:11.668814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69207 ] 00:10:20.711 [2024-11-26 18:57:11.865704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.711 [2024-11-26 18:57:11.995609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.970 [2024-11-26 18:57:12.202217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.970 [2024-11-26 18:57:12.202513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.537 BaseBdev1_malloc 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.537 true 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.537 [2024-11-26 18:57:12.716964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.537 [2024-11-26 18:57:12.717173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.537 [2024-11-26 18:57:12.717258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.537 [2024-11-26 18:57:12.717440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.537 [2024-11-26 18:57:12.720395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.537 BaseBdev1 00:10:21.537 [2024-11-26 18:57:12.720572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.537 BaseBdev2_malloc 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.537 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 true 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 [2024-11-26 18:57:12.778395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.538 [2024-11-26 18:57:12.778496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.538 [2024-11-26 18:57:12.778531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.538 [2024-11-26 18:57:12.778549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.538 [2024-11-26 18:57:12.781723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.538 [2024-11-26 18:57:12.781777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.538 BaseBdev2 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 BaseBdev3_malloc 00:10:21.538 18:57:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 true 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 [2024-11-26 18:57:12.860239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.538 [2024-11-26 18:57:12.860321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.538 [2024-11-26 18:57:12.860355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:21.538 [2024-11-26 18:57:12.860374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.538 [2024-11-26 18:57:12.863589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.538 [2024-11-26 18:57:12.863657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:21.538 BaseBdev3 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 [2024-11-26 18:57:12.872360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.538 [2024-11-26 18:57:12.875210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.538 [2024-11-26 18:57:12.875335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.538 [2024-11-26 18:57:12.875671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.538 [2024-11-26 18:57:12.875693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.538 [2024-11-26 18:57:12.876075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:21.538 [2024-11-26 18:57:12.876328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.538 [2024-11-26 18:57:12.876350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:21.538 [2024-11-26 18:57:12.876636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.538 18:57:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.538 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.796 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.796 "name": "raid_bdev1", 00:10:21.796 "uuid": "213fe300-b178-4d20-b0d7-98ebd7d0ca52", 00:10:21.796 "strip_size_kb": 0, 00:10:21.796 "state": "online", 00:10:21.796 "raid_level": "raid1", 00:10:21.796 "superblock": true, 00:10:21.796 "num_base_bdevs": 3, 00:10:21.796 "num_base_bdevs_discovered": 3, 00:10:21.796 "num_base_bdevs_operational": 3, 00:10:21.796 "base_bdevs_list": [ 00:10:21.796 { 00:10:21.796 "name": "BaseBdev1", 00:10:21.796 "uuid": "87aad2c4-7e3c-585b-b26d-f09f5d10e79f", 00:10:21.796 "is_configured": true, 00:10:21.796 "data_offset": 2048, 00:10:21.796 "data_size": 63488 00:10:21.796 }, 00:10:21.796 { 00:10:21.796 "name": "BaseBdev2", 00:10:21.796 "uuid": "c220d8be-e97c-5c14-acd4-069cde52d80b", 00:10:21.796 "is_configured": true, 00:10:21.796 "data_offset": 2048, 00:10:21.796 "data_size": 63488 
00:10:21.796 }, 00:10:21.796 { 00:10:21.796 "name": "BaseBdev3", 00:10:21.796 "uuid": "80ae72c6-dc91-5856-a3d4-9526c3197a7d", 00:10:21.796 "is_configured": true, 00:10:21.796 "data_offset": 2048, 00:10:21.796 "data_size": 63488 00:10:21.796 } 00:10:21.796 ] 00:10:21.796 }' 00:10:21.796 18:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.797 18:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 18:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.055 18:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.313 [2024-11-26 18:57:13.502233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.251 
18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.251 "name": "raid_bdev1", 00:10:23.251 "uuid": "213fe300-b178-4d20-b0d7-98ebd7d0ca52", 00:10:23.251 "strip_size_kb": 0, 00:10:23.251 "state": "online", 00:10:23.251 "raid_level": "raid1", 00:10:23.251 "superblock": true, 00:10:23.251 "num_base_bdevs": 3, 00:10:23.251 "num_base_bdevs_discovered": 3, 00:10:23.251 "num_base_bdevs_operational": 3, 00:10:23.251 "base_bdevs_list": [ 00:10:23.251 { 00:10:23.251 "name": "BaseBdev1", 00:10:23.251 "uuid": "87aad2c4-7e3c-585b-b26d-f09f5d10e79f", 
00:10:23.251 "is_configured": true, 00:10:23.251 "data_offset": 2048, 00:10:23.251 "data_size": 63488 00:10:23.251 }, 00:10:23.251 { 00:10:23.251 "name": "BaseBdev2", 00:10:23.251 "uuid": "c220d8be-e97c-5c14-acd4-069cde52d80b", 00:10:23.251 "is_configured": true, 00:10:23.251 "data_offset": 2048, 00:10:23.251 "data_size": 63488 00:10:23.251 }, 00:10:23.251 { 00:10:23.251 "name": "BaseBdev3", 00:10:23.251 "uuid": "80ae72c6-dc91-5856-a3d4-9526c3197a7d", 00:10:23.251 "is_configured": true, 00:10:23.251 "data_offset": 2048, 00:10:23.251 "data_size": 63488 00:10:23.251 } 00:10:23.251 ] 00:10:23.251 }' 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.251 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.553 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.553 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.553 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.812 [2024-11-26 18:57:14.919329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.812 [2024-11-26 18:57:14.919366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.812 { 00:10:23.812 "results": [ 00:10:23.812 { 00:10:23.812 "job": "raid_bdev1", 00:10:23.812 "core_mask": "0x1", 00:10:23.812 "workload": "randrw", 00:10:23.812 "percentage": 50, 00:10:23.812 "status": "finished", 00:10:23.812 "queue_depth": 1, 00:10:23.812 "io_size": 131072, 00:10:23.812 "runtime": 1.414562, 00:10:23.812 "iops": 9158.311901493184, 00:10:23.812 "mibps": 1144.788987686648, 00:10:23.812 "io_failed": 0, 00:10:23.812 "io_timeout": 0, 00:10:23.812 "avg_latency_us": 104.87881042770428, 00:10:23.812 "min_latency_us": 40.72727272727273, 00:10:23.812 "max_latency_us": 1906.5018181818182 00:10:23.812 
} 00:10:23.812 ], 00:10:23.812 "core_count": 1 00:10:23.812 } 00:10:23.812 [2024-11-26 18:57:14.923088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.812 [2024-11-26 18:57:14.923166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.812 [2024-11-26 18:57:14.923396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.812 [2024-11-26 18:57:14.923417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69207 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69207 ']' 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69207 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69207 00:10:23.812 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.813 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.813 killing process with pid 69207 00:10:23.813 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69207' 00:10:23.813 18:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69207 00:10:23.813 [2024-11-26 18:57:14.964779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.813 18:57:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69207 00:10:23.813 [2024-11-26 18:57:15.171642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qQow6ttOoP 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.194 ************************************ 00:10:25.194 END TEST raid_read_error_test 00:10:25.194 ************************************ 00:10:25.194 18:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.194 00:10:25.194 real 0m4.777s 00:10:25.195 user 0m5.894s 00:10:25.195 sys 0m0.628s 00:10:25.195 18:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.195 18:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.195 18:57:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:25.195 18:57:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.195 18:57:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.195 18:57:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.195 ************************************ 00:10:25.195 START TEST raid_write_error_test 00:10:25.195 ************************************ 00:10:25.195 18:57:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wN34ZWOArC 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69353 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69353 00:10:25.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69353 ']' 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.195 18:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.195 [2024-11-26 18:57:16.480104] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:10:25.195 [2024-11-26 18:57:16.480305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69353 ] 00:10:25.454 [2024-11-26 18:57:16.669671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.454 [2024-11-26 18:57:16.803067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.712 [2024-11-26 18:57:17.012187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.712 [2024-11-26 18:57:17.012426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.281 BaseBdev1_malloc 00:10:26.281 18:57:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.281 true 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.281 [2024-11-26 18:57:17.580159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.281 [2024-11-26 18:57:17.580229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.281 [2024-11-26 18:57:17.580260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.281 [2024-11-26 18:57:17.580279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.281 [2024-11-26 18:57:17.583129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.281 [2024-11-26 18:57:17.583181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.281 BaseBdev1 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.281 BaseBdev2_malloc 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.281 true 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.281 [2024-11-26 18:57:17.636681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.281 [2024-11-26 18:57:17.636911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.281 [2024-11-26 18:57:17.636949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.281 [2024-11-26 18:57:17.636970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.281 [2024-11-26 18:57:17.639832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.281 [2024-11-26 18:57:17.639884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.281 BaseBdev2 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.281 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.545 BaseBdev3_malloc 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.545 true 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.545 [2024-11-26 18:57:17.710175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:26.545 [2024-11-26 18:57:17.710405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.545 [2024-11-26 18:57:17.710477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:26.545 [2024-11-26 18:57:17.710642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.545 [2024-11-26 18:57:17.713665] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.545 [2024-11-26 18:57:17.713716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:26.545 BaseBdev3 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.545 [2024-11-26 18:57:17.718419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.545 [2024-11-26 18:57:17.721111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.545 [2024-11-26 18:57:17.721391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.545 [2024-11-26 18:57:17.721810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:26.545 [2024-11-26 18:57:17.721836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.545 [2024-11-26 18:57:17.722181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:26.545 [2024-11-26 18:57:17.722418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.545 [2024-11-26 18:57:17.722437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:26.545 [2024-11-26 18:57:17.722689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.545 18:57:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.545 "name": "raid_bdev1", 00:10:26.545 "uuid": "192729d5-c5ac-4239-9d68-377753cedebe", 00:10:26.545 "strip_size_kb": 0, 00:10:26.545 "state": "online", 00:10:26.545 "raid_level": "raid1", 00:10:26.545 "superblock": true, 00:10:26.545 
"num_base_bdevs": 3, 00:10:26.545 "num_base_bdevs_discovered": 3, 00:10:26.545 "num_base_bdevs_operational": 3, 00:10:26.545 "base_bdevs_list": [ 00:10:26.545 { 00:10:26.545 "name": "BaseBdev1", 00:10:26.545 "uuid": "156a828a-7dfb-56f1-898d-8317d69f842d", 00:10:26.545 "is_configured": true, 00:10:26.545 "data_offset": 2048, 00:10:26.545 "data_size": 63488 00:10:26.545 }, 00:10:26.545 { 00:10:26.545 "name": "BaseBdev2", 00:10:26.545 "uuid": "604cd43d-3ea4-58bb-bab2-01f857fa8a4f", 00:10:26.545 "is_configured": true, 00:10:26.545 "data_offset": 2048, 00:10:26.545 "data_size": 63488 00:10:26.545 }, 00:10:26.545 { 00:10:26.545 "name": "BaseBdev3", 00:10:26.545 "uuid": "8bb5f1a9-5751-5374-a497-ac62b964069b", 00:10:26.545 "is_configured": true, 00:10:26.545 "data_offset": 2048, 00:10:26.545 "data_size": 63488 00:10:26.545 } 00:10:26.545 ] 00:10:26.545 }' 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.545 18:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.130 18:57:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.130 18:57:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.130 [2024-11-26 18:57:18.348349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.101 [2024-11-26 18:57:19.229095] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:28.101 [2024-11-26 18:57:19.229159] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.101 [2024-11-26 18:57:19.229429] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.101 18:57:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.101 "name": "raid_bdev1", 00:10:28.101 "uuid": "192729d5-c5ac-4239-9d68-377753cedebe", 00:10:28.101 "strip_size_kb": 0, 00:10:28.101 "state": "online", 00:10:28.101 "raid_level": "raid1", 00:10:28.101 "superblock": true, 00:10:28.101 "num_base_bdevs": 3, 00:10:28.101 "num_base_bdevs_discovered": 2, 00:10:28.101 "num_base_bdevs_operational": 2, 00:10:28.101 "base_bdevs_list": [ 00:10:28.101 { 00:10:28.101 "name": null, 00:10:28.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.101 "is_configured": false, 00:10:28.101 "data_offset": 0, 00:10:28.101 "data_size": 63488 00:10:28.101 }, 00:10:28.101 { 00:10:28.101 "name": "BaseBdev2", 00:10:28.101 "uuid": "604cd43d-3ea4-58bb-bab2-01f857fa8a4f", 00:10:28.101 "is_configured": true, 00:10:28.101 "data_offset": 2048, 00:10:28.101 "data_size": 63488 00:10:28.101 }, 00:10:28.101 { 00:10:28.101 "name": "BaseBdev3", 00:10:28.101 "uuid": "8bb5f1a9-5751-5374-a497-ac62b964069b", 00:10:28.101 "is_configured": true, 00:10:28.101 "data_offset": 2048, 00:10:28.101 "data_size": 63488 00:10:28.101 } 00:10:28.101 ] 00:10:28.101 }' 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.101 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.669 [2024-11-26 18:57:19.754391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.669 [2024-11-26 18:57:19.754447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.669 [2024-11-26 18:57:19.758138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.669 [2024-11-26 18:57:19.758351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.669 [2024-11-26 18:57:19.758574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.669 [2024-11-26 18:57:19.758735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:28.669 { 00:10:28.669 "results": [ 00:10:28.669 { 00:10:28.669 "job": "raid_bdev1", 00:10:28.669 "core_mask": "0x1", 00:10:28.669 "workload": "randrw", 00:10:28.669 "percentage": 50, 00:10:28.669 "status": "finished", 00:10:28.669 "queue_depth": 1, 00:10:28.669 "io_size": 131072, 00:10:28.669 "runtime": 1.403361, 00:10:28.669 "iops": 10472.715145995933, 00:10:28.669 "mibps": 1309.0893932494916, 00:10:28.669 "io_failed": 0, 00:10:28.669 "io_timeout": 0, 00:10:28.669 "avg_latency_us": 91.23251127317263, 00:10:28.669 "min_latency_us": 41.89090909090909, 00:10:28.669 "max_latency_us": 1846.9236363636364 00:10:28.669 } 00:10:28.669 ], 00:10:28.669 "core_count": 1 00:10:28.669 } 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69353 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69353 ']' 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 69353 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69353 00:10:28.669 killing process with pid 69353 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69353' 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69353 00:10:28.669 [2024-11-26 18:57:19.795653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.669 18:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69353 00:10:28.669 [2024-11-26 18:57:20.004198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wN34ZWOArC 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:10:30.043 00:10:30.043 real 0m4.774s 00:10:30.043 user 0m5.904s 00:10:30.043 sys 0m0.624s 00:10:30.043 ************************************ 00:10:30.043 END TEST raid_write_error_test 00:10:30.043 ************************************ 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.043 18:57:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.043 18:57:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:30.043 18:57:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.043 18:57:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:30.043 18:57:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.043 18:57:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.043 18:57:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.043 ************************************ 00:10:30.043 START TEST raid_state_function_test 00:10:30.043 ************************************ 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.043 18:57:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.043 18:57:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:30.044 Process raid pid: 69497 00:10:30.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69497 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69497' 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69497 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69497 ']' 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.044 18:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.044 [2024-11-26 18:57:21.292086] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:10:30.044 [2024-11-26 18:57:21.292663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.302 [2024-11-26 18:57:21.478410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.302 [2024-11-26 18:57:21.606999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.559 [2024-11-26 18:57:21.815391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.559 [2024-11-26 18:57:21.815452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.126 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.127 [2024-11-26 18:57:22.310318] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.127 [2024-11-26 18:57:22.310415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.127 [2024-11-26 18:57:22.310433] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.127 [2024-11-26 18:57:22.310450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.127 [2024-11-26 18:57:22.310460] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:31.127 [2024-11-26 18:57:22.310474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.127 [2024-11-26 18:57:22.310484] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.127 [2024-11-26 18:57:22.310499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.127 "name": "Existed_Raid", 00:10:31.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.127 "strip_size_kb": 64, 00:10:31.127 "state": "configuring", 00:10:31.127 "raid_level": "raid0", 00:10:31.127 "superblock": false, 00:10:31.127 "num_base_bdevs": 4, 00:10:31.127 "num_base_bdevs_discovered": 0, 00:10:31.127 "num_base_bdevs_operational": 4, 00:10:31.127 "base_bdevs_list": [ 00:10:31.127 { 00:10:31.127 "name": "BaseBdev1", 00:10:31.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.127 "is_configured": false, 00:10:31.127 "data_offset": 0, 00:10:31.127 "data_size": 0 00:10:31.127 }, 00:10:31.127 { 00:10:31.127 "name": "BaseBdev2", 00:10:31.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.127 "is_configured": false, 00:10:31.127 "data_offset": 0, 00:10:31.127 "data_size": 0 00:10:31.127 }, 00:10:31.127 { 00:10:31.127 "name": "BaseBdev3", 00:10:31.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.127 "is_configured": false, 00:10:31.127 "data_offset": 0, 00:10:31.127 "data_size": 0 00:10:31.127 }, 00:10:31.127 { 00:10:31.127 "name": "BaseBdev4", 00:10:31.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.127 "is_configured": false, 00:10:31.127 "data_offset": 0, 00:10:31.127 "data_size": 0 00:10:31.127 } 00:10:31.127 ] 00:10:31.127 }' 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.127 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.694 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:31.694 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.694 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.694 [2024-11-26 18:57:22.818401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.694 [2024-11-26 18:57:22.818468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:31.694 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 [2024-11-26 18:57:22.826383] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.695 [2024-11-26 18:57:22.826447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.695 [2024-11-26 18:57:22.826479] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.695 [2024-11-26 18:57:22.826495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.695 [2024-11-26 18:57:22.826505] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.695 [2024-11-26 18:57:22.826520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.695 [2024-11-26 18:57:22.826529] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.695 [2024-11-26 18:57:22.826544] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 [2024-11-26 18:57:22.872095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.695 BaseBdev1 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 [ 00:10:31.695 { 00:10:31.695 "name": "BaseBdev1", 00:10:31.695 "aliases": [ 00:10:31.695 "4df52b76-af6c-4d60-afd9-41104e95cb1b" 00:10:31.695 ], 00:10:31.695 "product_name": "Malloc disk", 00:10:31.695 "block_size": 512, 00:10:31.695 "num_blocks": 65536, 00:10:31.695 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:31.695 "assigned_rate_limits": { 00:10:31.695 "rw_ios_per_sec": 0, 00:10:31.695 "rw_mbytes_per_sec": 0, 00:10:31.695 "r_mbytes_per_sec": 0, 00:10:31.695 "w_mbytes_per_sec": 0 00:10:31.695 }, 00:10:31.695 "claimed": true, 00:10:31.695 "claim_type": "exclusive_write", 00:10:31.695 "zoned": false, 00:10:31.695 "supported_io_types": { 00:10:31.695 "read": true, 00:10:31.695 "write": true, 00:10:31.695 "unmap": true, 00:10:31.695 "flush": true, 00:10:31.695 "reset": true, 00:10:31.695 "nvme_admin": false, 00:10:31.695 "nvme_io": false, 00:10:31.695 "nvme_io_md": false, 00:10:31.695 "write_zeroes": true, 00:10:31.695 "zcopy": true, 00:10:31.695 "get_zone_info": false, 00:10:31.695 "zone_management": false, 00:10:31.695 "zone_append": false, 00:10:31.695 "compare": false, 00:10:31.695 "compare_and_write": false, 00:10:31.695 "abort": true, 00:10:31.695 "seek_hole": false, 00:10:31.695 "seek_data": false, 00:10:31.695 "copy": true, 00:10:31.695 "nvme_iov_md": false 00:10:31.695 }, 00:10:31.695 "memory_domains": [ 00:10:31.695 { 00:10:31.695 "dma_device_id": "system", 00:10:31.695 "dma_device_type": 1 00:10:31.695 }, 00:10:31.695 { 00:10:31.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.695 "dma_device_type": 2 00:10:31.695 } 00:10:31.695 ], 00:10:31.695 "driver_specific": {} 00:10:31.695 } 00:10:31.695 ] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.695 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.695 "name": "Existed_Raid", 
00:10:31.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.695 "strip_size_kb": 64, 00:10:31.695 "state": "configuring", 00:10:31.695 "raid_level": "raid0", 00:10:31.695 "superblock": false, 00:10:31.695 "num_base_bdevs": 4, 00:10:31.695 "num_base_bdevs_discovered": 1, 00:10:31.695 "num_base_bdevs_operational": 4, 00:10:31.695 "base_bdevs_list": [ 00:10:31.695 { 00:10:31.695 "name": "BaseBdev1", 00:10:31.695 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:31.695 "is_configured": true, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 65536 00:10:31.696 }, 00:10:31.696 { 00:10:31.696 "name": "BaseBdev2", 00:10:31.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.696 "is_configured": false, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 0 00:10:31.696 }, 00:10:31.696 { 00:10:31.696 "name": "BaseBdev3", 00:10:31.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.696 "is_configured": false, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 0 00:10:31.696 }, 00:10:31.696 { 00:10:31.696 "name": "BaseBdev4", 00:10:31.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.696 "is_configured": false, 00:10:31.696 "data_offset": 0, 00:10:31.696 "data_size": 0 00:10:31.696 } 00:10:31.696 ] 00:10:31.696 }' 00:10:31.696 18:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.696 18:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 [2024-11-26 18:57:23.416327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.262 [2024-11-26 18:57:23.416406] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 [2024-11-26 18:57:23.424386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.262 [2024-11-26 18:57:23.426943] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.262 [2024-11-26 18:57:23.427007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.262 [2024-11-26 18:57:23.427024] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.262 [2024-11-26 18:57:23.427041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.262 [2024-11-26 18:57:23.427051] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.262 [2024-11-26 18:57:23.427064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.262 "name": "Existed_Raid", 00:10:32.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.262 "strip_size_kb": 64, 00:10:32.262 "state": "configuring", 00:10:32.262 "raid_level": "raid0", 00:10:32.262 "superblock": false, 00:10:32.262 "num_base_bdevs": 4, 00:10:32.262 
"num_base_bdevs_discovered": 1, 00:10:32.262 "num_base_bdevs_operational": 4, 00:10:32.262 "base_bdevs_list": [ 00:10:32.262 { 00:10:32.262 "name": "BaseBdev1", 00:10:32.262 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:32.262 "is_configured": true, 00:10:32.262 "data_offset": 0, 00:10:32.262 "data_size": 65536 00:10:32.262 }, 00:10:32.262 { 00:10:32.262 "name": "BaseBdev2", 00:10:32.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.262 "is_configured": false, 00:10:32.262 "data_offset": 0, 00:10:32.262 "data_size": 0 00:10:32.262 }, 00:10:32.262 { 00:10:32.262 "name": "BaseBdev3", 00:10:32.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.262 "is_configured": false, 00:10:32.262 "data_offset": 0, 00:10:32.262 "data_size": 0 00:10:32.262 }, 00:10:32.262 { 00:10:32.262 "name": "BaseBdev4", 00:10:32.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.262 "is_configured": false, 00:10:32.262 "data_offset": 0, 00:10:32.262 "data_size": 0 00:10:32.262 } 00:10:32.262 ] 00:10:32.262 }' 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.262 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.583 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.583 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.583 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.843 [2024-11-26 18:57:23.982797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.843 BaseBdev2 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.843 18:57:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.843 18:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.843 [ 00:10:32.843 { 00:10:32.843 "name": "BaseBdev2", 00:10:32.843 "aliases": [ 00:10:32.843 "4f173918-ea79-442a-b791-7e8d2697a964" 00:10:32.843 ], 00:10:32.843 "product_name": "Malloc disk", 00:10:32.843 "block_size": 512, 00:10:32.843 "num_blocks": 65536, 00:10:32.843 "uuid": "4f173918-ea79-442a-b791-7e8d2697a964", 00:10:32.843 "assigned_rate_limits": { 00:10:32.843 "rw_ios_per_sec": 0, 00:10:32.843 "rw_mbytes_per_sec": 0, 00:10:32.843 "r_mbytes_per_sec": 0, 00:10:32.843 "w_mbytes_per_sec": 0 00:10:32.843 }, 00:10:32.843 "claimed": true, 00:10:32.843 "claim_type": "exclusive_write", 00:10:32.843 "zoned": false, 00:10:32.843 "supported_io_types": { 
00:10:32.843 "read": true, 00:10:32.843 "write": true, 00:10:32.843 "unmap": true, 00:10:32.843 "flush": true, 00:10:32.843 "reset": true, 00:10:32.843 "nvme_admin": false, 00:10:32.843 "nvme_io": false, 00:10:32.843 "nvme_io_md": false, 00:10:32.843 "write_zeroes": true, 00:10:32.843 "zcopy": true, 00:10:32.843 "get_zone_info": false, 00:10:32.843 "zone_management": false, 00:10:32.843 "zone_append": false, 00:10:32.843 "compare": false, 00:10:32.843 "compare_and_write": false, 00:10:32.843 "abort": true, 00:10:32.843 "seek_hole": false, 00:10:32.843 "seek_data": false, 00:10:32.843 "copy": true, 00:10:32.843 "nvme_iov_md": false 00:10:32.843 }, 00:10:32.843 "memory_domains": [ 00:10:32.843 { 00:10:32.843 "dma_device_id": "system", 00:10:32.843 "dma_device_type": 1 00:10:32.843 }, 00:10:32.844 { 00:10:32.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.844 "dma_device_type": 2 00:10:32.844 } 00:10:32.844 ], 00:10:32.844 "driver_specific": {} 00:10:32.844 } 00:10:32.844 ] 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.844 "name": "Existed_Raid", 00:10:32.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.844 "strip_size_kb": 64, 00:10:32.844 "state": "configuring", 00:10:32.844 "raid_level": "raid0", 00:10:32.844 "superblock": false, 00:10:32.844 "num_base_bdevs": 4, 00:10:32.844 "num_base_bdevs_discovered": 2, 00:10:32.844 "num_base_bdevs_operational": 4, 00:10:32.844 "base_bdevs_list": [ 00:10:32.844 { 00:10:32.844 "name": "BaseBdev1", 00:10:32.844 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:32.844 "is_configured": true, 00:10:32.844 "data_offset": 0, 00:10:32.844 "data_size": 65536 00:10:32.844 }, 00:10:32.844 { 00:10:32.844 "name": "BaseBdev2", 00:10:32.844 "uuid": "4f173918-ea79-442a-b791-7e8d2697a964", 00:10:32.844 
"is_configured": true, 00:10:32.844 "data_offset": 0, 00:10:32.844 "data_size": 65536 00:10:32.844 }, 00:10:32.844 { 00:10:32.844 "name": "BaseBdev3", 00:10:32.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.844 "is_configured": false, 00:10:32.844 "data_offset": 0, 00:10:32.844 "data_size": 0 00:10:32.844 }, 00:10:32.844 { 00:10:32.844 "name": "BaseBdev4", 00:10:32.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.844 "is_configured": false, 00:10:32.844 "data_offset": 0, 00:10:32.844 "data_size": 0 00:10:32.844 } 00:10:32.844 ] 00:10:32.844 }' 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.844 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.411 [2024-11-26 18:57:24.566623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.411 BaseBdev3 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.411 [ 00:10:33.411 { 00:10:33.411 "name": "BaseBdev3", 00:10:33.411 "aliases": [ 00:10:33.411 "26f0a029-56ba-4117-90de-55f727846a9a" 00:10:33.411 ], 00:10:33.411 "product_name": "Malloc disk", 00:10:33.411 "block_size": 512, 00:10:33.411 "num_blocks": 65536, 00:10:33.411 "uuid": "26f0a029-56ba-4117-90de-55f727846a9a", 00:10:33.411 "assigned_rate_limits": { 00:10:33.411 "rw_ios_per_sec": 0, 00:10:33.411 "rw_mbytes_per_sec": 0, 00:10:33.411 "r_mbytes_per_sec": 0, 00:10:33.411 "w_mbytes_per_sec": 0 00:10:33.411 }, 00:10:33.411 "claimed": true, 00:10:33.411 "claim_type": "exclusive_write", 00:10:33.411 "zoned": false, 00:10:33.411 "supported_io_types": { 00:10:33.411 "read": true, 00:10:33.411 "write": true, 00:10:33.411 "unmap": true, 00:10:33.411 "flush": true, 00:10:33.411 "reset": true, 00:10:33.411 "nvme_admin": false, 00:10:33.411 "nvme_io": false, 00:10:33.411 "nvme_io_md": false, 00:10:33.411 "write_zeroes": true, 00:10:33.411 "zcopy": true, 00:10:33.411 "get_zone_info": false, 00:10:33.411 "zone_management": false, 00:10:33.411 "zone_append": false, 00:10:33.411 "compare": false, 00:10:33.411 "compare_and_write": false, 
00:10:33.411 "abort": true, 00:10:33.411 "seek_hole": false, 00:10:33.411 "seek_data": false, 00:10:33.411 "copy": true, 00:10:33.411 "nvme_iov_md": false 00:10:33.411 }, 00:10:33.411 "memory_domains": [ 00:10:33.411 { 00:10:33.411 "dma_device_id": "system", 00:10:33.411 "dma_device_type": 1 00:10:33.411 }, 00:10:33.411 { 00:10:33.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.411 "dma_device_type": 2 00:10:33.411 } 00:10:33.411 ], 00:10:33.411 "driver_specific": {} 00:10:33.411 } 00:10:33.411 ] 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.411 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.412 "name": "Existed_Raid", 00:10:33.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.412 "strip_size_kb": 64, 00:10:33.412 "state": "configuring", 00:10:33.412 "raid_level": "raid0", 00:10:33.412 "superblock": false, 00:10:33.412 "num_base_bdevs": 4, 00:10:33.412 "num_base_bdevs_discovered": 3, 00:10:33.412 "num_base_bdevs_operational": 4, 00:10:33.412 "base_bdevs_list": [ 00:10:33.412 { 00:10:33.412 "name": "BaseBdev1", 00:10:33.412 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:33.412 "is_configured": true, 00:10:33.412 "data_offset": 0, 00:10:33.412 "data_size": 65536 00:10:33.412 }, 00:10:33.412 { 00:10:33.412 "name": "BaseBdev2", 00:10:33.412 "uuid": "4f173918-ea79-442a-b791-7e8d2697a964", 00:10:33.412 "is_configured": true, 00:10:33.412 "data_offset": 0, 00:10:33.412 "data_size": 65536 00:10:33.412 }, 00:10:33.412 { 00:10:33.412 "name": "BaseBdev3", 00:10:33.412 "uuid": "26f0a029-56ba-4117-90de-55f727846a9a", 00:10:33.412 "is_configured": true, 00:10:33.412 "data_offset": 0, 00:10:33.412 "data_size": 65536 00:10:33.412 }, 00:10:33.412 { 00:10:33.412 "name": "BaseBdev4", 00:10:33.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.412 "is_configured": false, 
00:10:33.412 "data_offset": 0, 00:10:33.412 "data_size": 0 00:10:33.412 } 00:10:33.412 ] 00:10:33.412 }' 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.412 18:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 [2024-11-26 18:57:25.186673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.979 BaseBdev4 00:10:33.979 [2024-11-26 18:57:25.186980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:33.979 [2024-11-26 18:57:25.187008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:33.979 [2024-11-26 18:57:25.187392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:33.979 [2024-11-26 18:57:25.187617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:33.979 [2024-11-26 18:57:25.187642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:33.979 [2024-11-26 18:57:25.187974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 [ 00:10:33.979 { 00:10:33.979 "name": "BaseBdev4", 00:10:33.979 "aliases": [ 00:10:33.979 "c8688c0b-794f-4f64-9418-4b0db59f36f1" 00:10:33.979 ], 00:10:33.979 "product_name": "Malloc disk", 00:10:33.979 "block_size": 512, 00:10:33.979 "num_blocks": 65536, 00:10:33.979 "uuid": "c8688c0b-794f-4f64-9418-4b0db59f36f1", 00:10:33.979 "assigned_rate_limits": { 00:10:33.979 "rw_ios_per_sec": 0, 00:10:33.979 "rw_mbytes_per_sec": 0, 00:10:33.979 "r_mbytes_per_sec": 0, 00:10:33.979 "w_mbytes_per_sec": 0 00:10:33.979 }, 00:10:33.979 "claimed": true, 00:10:33.979 "claim_type": "exclusive_write", 00:10:33.979 "zoned": false, 00:10:33.979 "supported_io_types": { 00:10:33.979 "read": true, 00:10:33.979 "write": true, 00:10:33.979 "unmap": true, 00:10:33.979 "flush": true, 00:10:33.979 "reset": true, 00:10:33.979 
"nvme_admin": false, 00:10:33.979 "nvme_io": false, 00:10:33.979 "nvme_io_md": false, 00:10:33.979 "write_zeroes": true, 00:10:33.979 "zcopy": true, 00:10:33.979 "get_zone_info": false, 00:10:33.979 "zone_management": false, 00:10:33.979 "zone_append": false, 00:10:33.979 "compare": false, 00:10:33.979 "compare_and_write": false, 00:10:33.979 "abort": true, 00:10:33.979 "seek_hole": false, 00:10:33.979 "seek_data": false, 00:10:33.979 "copy": true, 00:10:33.979 "nvme_iov_md": false 00:10:33.979 }, 00:10:33.979 "memory_domains": [ 00:10:33.979 { 00:10:33.979 "dma_device_id": "system", 00:10:33.979 "dma_device_type": 1 00:10:33.979 }, 00:10:33.979 { 00:10:33.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.979 "dma_device_type": 2 00:10:33.979 } 00:10:33.979 ], 00:10:33.979 "driver_specific": {} 00:10:33.979 } 00:10:33.979 ] 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.979 18:57:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.979 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.979 "name": "Existed_Raid", 00:10:33.979 "uuid": "a88e7244-5af4-4d83-9fd6-1c88ee0e2bbc", 00:10:33.979 "strip_size_kb": 64, 00:10:33.979 "state": "online", 00:10:33.979 "raid_level": "raid0", 00:10:33.979 "superblock": false, 00:10:33.979 "num_base_bdevs": 4, 00:10:33.979 "num_base_bdevs_discovered": 4, 00:10:33.979 "num_base_bdevs_operational": 4, 00:10:33.979 "base_bdevs_list": [ 00:10:33.979 { 00:10:33.979 "name": "BaseBdev1", 00:10:33.979 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:33.979 "is_configured": true, 00:10:33.979 "data_offset": 0, 00:10:33.979 "data_size": 65536 00:10:33.979 }, 00:10:33.979 { 00:10:33.979 "name": "BaseBdev2", 00:10:33.979 "uuid": "4f173918-ea79-442a-b791-7e8d2697a964", 00:10:33.979 "is_configured": true, 00:10:33.979 "data_offset": 0, 00:10:33.979 "data_size": 65536 00:10:33.979 }, 00:10:33.979 { 00:10:33.979 "name": "BaseBdev3", 00:10:33.979 "uuid": 
"26f0a029-56ba-4117-90de-55f727846a9a", 00:10:33.979 "is_configured": true, 00:10:33.979 "data_offset": 0, 00:10:33.979 "data_size": 65536 00:10:33.979 }, 00:10:33.979 { 00:10:33.979 "name": "BaseBdev4", 00:10:33.979 "uuid": "c8688c0b-794f-4f64-9418-4b0db59f36f1", 00:10:33.979 "is_configured": true, 00:10:33.979 "data_offset": 0, 00:10:33.980 "data_size": 65536 00:10:33.980 } 00:10:33.980 ] 00:10:33.980 }' 00:10:33.980 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.980 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.548 [2024-11-26 18:57:25.747405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.548 18:57:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.548 "name": "Existed_Raid", 00:10:34.548 "aliases": [ 00:10:34.548 "a88e7244-5af4-4d83-9fd6-1c88ee0e2bbc" 00:10:34.548 ], 00:10:34.548 "product_name": "Raid Volume", 00:10:34.548 "block_size": 512, 00:10:34.548 "num_blocks": 262144, 00:10:34.548 "uuid": "a88e7244-5af4-4d83-9fd6-1c88ee0e2bbc", 00:10:34.548 "assigned_rate_limits": { 00:10:34.548 "rw_ios_per_sec": 0, 00:10:34.548 "rw_mbytes_per_sec": 0, 00:10:34.548 "r_mbytes_per_sec": 0, 00:10:34.548 "w_mbytes_per_sec": 0 00:10:34.548 }, 00:10:34.548 "claimed": false, 00:10:34.548 "zoned": false, 00:10:34.548 "supported_io_types": { 00:10:34.548 "read": true, 00:10:34.548 "write": true, 00:10:34.548 "unmap": true, 00:10:34.548 "flush": true, 00:10:34.548 "reset": true, 00:10:34.548 "nvme_admin": false, 00:10:34.548 "nvme_io": false, 00:10:34.548 "nvme_io_md": false, 00:10:34.548 "write_zeroes": true, 00:10:34.548 "zcopy": false, 00:10:34.548 "get_zone_info": false, 00:10:34.548 "zone_management": false, 00:10:34.548 "zone_append": false, 00:10:34.548 "compare": false, 00:10:34.548 "compare_and_write": false, 00:10:34.548 "abort": false, 00:10:34.548 "seek_hole": false, 00:10:34.548 "seek_data": false, 00:10:34.548 "copy": false, 00:10:34.548 "nvme_iov_md": false 00:10:34.548 }, 00:10:34.548 "memory_domains": [ 00:10:34.548 { 00:10:34.548 "dma_device_id": "system", 00:10:34.548 "dma_device_type": 1 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.548 "dma_device_type": 2 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "system", 00:10:34.548 "dma_device_type": 1 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.548 "dma_device_type": 2 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "system", 00:10:34.548 "dma_device_type": 1 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:34.548 "dma_device_type": 2 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "system", 00:10:34.548 "dma_device_type": 1 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.548 "dma_device_type": 2 00:10:34.548 } 00:10:34.548 ], 00:10:34.548 "driver_specific": { 00:10:34.548 "raid": { 00:10:34.548 "uuid": "a88e7244-5af4-4d83-9fd6-1c88ee0e2bbc", 00:10:34.548 "strip_size_kb": 64, 00:10:34.548 "state": "online", 00:10:34.548 "raid_level": "raid0", 00:10:34.548 "superblock": false, 00:10:34.548 "num_base_bdevs": 4, 00:10:34.548 "num_base_bdevs_discovered": 4, 00:10:34.548 "num_base_bdevs_operational": 4, 00:10:34.548 "base_bdevs_list": [ 00:10:34.548 { 00:10:34.548 "name": "BaseBdev1", 00:10:34.548 "uuid": "4df52b76-af6c-4d60-afd9-41104e95cb1b", 00:10:34.548 "is_configured": true, 00:10:34.548 "data_offset": 0, 00:10:34.548 "data_size": 65536 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "name": "BaseBdev2", 00:10:34.548 "uuid": "4f173918-ea79-442a-b791-7e8d2697a964", 00:10:34.548 "is_configured": true, 00:10:34.548 "data_offset": 0, 00:10:34.548 "data_size": 65536 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "name": "BaseBdev3", 00:10:34.548 "uuid": "26f0a029-56ba-4117-90de-55f727846a9a", 00:10:34.548 "is_configured": true, 00:10:34.548 "data_offset": 0, 00:10:34.548 "data_size": 65536 00:10:34.548 }, 00:10:34.548 { 00:10:34.548 "name": "BaseBdev4", 00:10:34.548 "uuid": "c8688c0b-794f-4f64-9418-4b0db59f36f1", 00:10:34.548 "is_configured": true, 00:10:34.548 "data_offset": 0, 00:10:34.548 "data_size": 65536 00:10:34.548 } 00:10:34.548 ] 00:10:34.548 } 00:10:34.548 } 00:10:34.548 }' 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.548 BaseBdev2 00:10:34.548 BaseBdev3 
00:10:34.548 BaseBdev4' 00:10:34.548 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.549 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.549 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.549 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.549 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.549 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.549 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.808 18:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.808 18:57:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.808 18:57:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.808 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.808 [2024-11-26 18:57:26.123148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.808 [2024-11-26 18:57:26.123351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.808 [2024-11-26 18:57:26.123588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.068 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.068 "name": "Existed_Raid", 00:10:35.068 "uuid": "a88e7244-5af4-4d83-9fd6-1c88ee0e2bbc", 00:10:35.068 "strip_size_kb": 64, 00:10:35.068 "state": "offline", 00:10:35.068 "raid_level": "raid0", 00:10:35.068 "superblock": false, 00:10:35.068 "num_base_bdevs": 4, 00:10:35.068 "num_base_bdevs_discovered": 3, 00:10:35.068 "num_base_bdevs_operational": 3, 00:10:35.068 "base_bdevs_list": [ 00:10:35.068 { 00:10:35.068 "name": null, 00:10:35.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.068 "is_configured": false, 00:10:35.068 "data_offset": 0, 00:10:35.068 "data_size": 65536 00:10:35.068 }, 00:10:35.068 { 00:10:35.068 "name": "BaseBdev2", 00:10:35.068 "uuid": "4f173918-ea79-442a-b791-7e8d2697a964", 00:10:35.068 "is_configured": 
true, 00:10:35.068 "data_offset": 0, 00:10:35.068 "data_size": 65536 00:10:35.068 }, 00:10:35.068 { 00:10:35.068 "name": "BaseBdev3", 00:10:35.068 "uuid": "26f0a029-56ba-4117-90de-55f727846a9a", 00:10:35.068 "is_configured": true, 00:10:35.068 "data_offset": 0, 00:10:35.068 "data_size": 65536 00:10:35.068 }, 00:10:35.068 { 00:10:35.068 "name": "BaseBdev4", 00:10:35.068 "uuid": "c8688c0b-794f-4f64-9418-4b0db59f36f1", 00:10:35.069 "is_configured": true, 00:10:35.069 "data_offset": 0, 00:10:35.069 "data_size": 65536 00:10:35.069 } 00:10:35.069 ] 00:10:35.069 }' 00:10:35.069 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.069 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.643 [2024-11-26 18:57:26.774077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.643 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.644 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.644 18:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.644 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.644 18:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.644 [2024-11-26 18:57:26.919662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.947 18:57:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.947 [2024-11-26 18:57:27.065759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:35.947 [2024-11-26 18:57:27.065977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:35.947 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.948 BaseBdev2 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.948 [ 00:10:35.948 { 00:10:35.948 "name": "BaseBdev2", 00:10:35.948 "aliases": [ 00:10:35.948 "508626a2-2afb-4f6b-91a0-ec28a28b729a" 00:10:35.948 ], 00:10:35.948 "product_name": "Malloc disk", 00:10:35.948 "block_size": 512, 00:10:35.948 "num_blocks": 65536, 00:10:35.948 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:35.948 "assigned_rate_limits": { 00:10:35.948 "rw_ios_per_sec": 0, 00:10:35.948 "rw_mbytes_per_sec": 0, 00:10:35.948 "r_mbytes_per_sec": 0, 00:10:35.948 "w_mbytes_per_sec": 0 00:10:35.948 }, 00:10:35.948 "claimed": false, 00:10:35.948 "zoned": false, 00:10:35.948 "supported_io_types": { 00:10:35.948 "read": true, 00:10:35.948 "write": true, 00:10:35.948 "unmap": true, 00:10:35.948 "flush": true, 00:10:35.948 "reset": true, 00:10:35.948 "nvme_admin": false, 00:10:35.948 "nvme_io": false, 00:10:35.948 "nvme_io_md": false, 00:10:35.948 "write_zeroes": true, 00:10:35.948 "zcopy": true, 00:10:35.948 "get_zone_info": false, 00:10:35.948 "zone_management": false, 00:10:35.948 "zone_append": false, 00:10:35.948 "compare": false, 00:10:35.948 "compare_and_write": false, 00:10:35.948 "abort": true, 00:10:35.948 "seek_hole": false, 00:10:35.948 
"seek_data": false, 00:10:35.948 "copy": true, 00:10:35.948 "nvme_iov_md": false 00:10:35.948 }, 00:10:35.948 "memory_domains": [ 00:10:35.948 { 00:10:35.948 "dma_device_id": "system", 00:10:35.948 "dma_device_type": 1 00:10:35.948 }, 00:10:35.948 { 00:10:35.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.948 "dma_device_type": 2 00:10:35.948 } 00:10:35.948 ], 00:10:35.948 "driver_specific": {} 00:10:35.948 } 00:10:35.948 ] 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.948 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.223 BaseBdev3 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.223 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.223 [ 00:10:36.223 { 00:10:36.223 "name": "BaseBdev3", 00:10:36.223 "aliases": [ 00:10:36.223 "a731ef13-1d70-4458-8d11-a981ea68642c" 00:10:36.223 ], 00:10:36.223 "product_name": "Malloc disk", 00:10:36.223 "block_size": 512, 00:10:36.223 "num_blocks": 65536, 00:10:36.223 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:36.223 "assigned_rate_limits": { 00:10:36.223 "rw_ios_per_sec": 0, 00:10:36.223 "rw_mbytes_per_sec": 0, 00:10:36.223 "r_mbytes_per_sec": 0, 00:10:36.223 "w_mbytes_per_sec": 0 00:10:36.223 }, 00:10:36.223 "claimed": false, 00:10:36.223 "zoned": false, 00:10:36.223 "supported_io_types": { 00:10:36.223 "read": true, 00:10:36.223 "write": true, 00:10:36.223 "unmap": true, 00:10:36.223 "flush": true, 00:10:36.223 "reset": true, 00:10:36.223 "nvme_admin": false, 00:10:36.223 "nvme_io": false, 00:10:36.223 "nvme_io_md": false, 00:10:36.223 "write_zeroes": true, 00:10:36.223 "zcopy": true, 00:10:36.223 "get_zone_info": false, 00:10:36.223 "zone_management": false, 00:10:36.223 "zone_append": false, 00:10:36.223 "compare": false, 00:10:36.223 "compare_and_write": false, 00:10:36.223 "abort": true, 00:10:36.223 "seek_hole": false, 00:10:36.223 "seek_data": false, 
00:10:36.223 "copy": true, 00:10:36.223 "nvme_iov_md": false 00:10:36.223 }, 00:10:36.223 "memory_domains": [ 00:10:36.224 { 00:10:36.224 "dma_device_id": "system", 00:10:36.224 "dma_device_type": 1 00:10:36.224 }, 00:10:36.224 { 00:10:36.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.224 "dma_device_type": 2 00:10:36.224 } 00:10:36.224 ], 00:10:36.224 "driver_specific": {} 00:10:36.224 } 00:10:36.224 ] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.224 BaseBdev4 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.224 
18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.224 [ 00:10:36.224 { 00:10:36.224 "name": "BaseBdev4", 00:10:36.224 "aliases": [ 00:10:36.224 "8170853d-55d0-48e4-bfa5-b7a48095e651" 00:10:36.224 ], 00:10:36.224 "product_name": "Malloc disk", 00:10:36.224 "block_size": 512, 00:10:36.224 "num_blocks": 65536, 00:10:36.224 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:36.224 "assigned_rate_limits": { 00:10:36.224 "rw_ios_per_sec": 0, 00:10:36.224 "rw_mbytes_per_sec": 0, 00:10:36.224 "r_mbytes_per_sec": 0, 00:10:36.224 "w_mbytes_per_sec": 0 00:10:36.224 }, 00:10:36.224 "claimed": false, 00:10:36.224 "zoned": false, 00:10:36.224 "supported_io_types": { 00:10:36.224 "read": true, 00:10:36.224 "write": true, 00:10:36.224 "unmap": true, 00:10:36.224 "flush": true, 00:10:36.224 "reset": true, 00:10:36.224 "nvme_admin": false, 00:10:36.224 "nvme_io": false, 00:10:36.224 "nvme_io_md": false, 00:10:36.224 "write_zeroes": true, 00:10:36.224 "zcopy": true, 00:10:36.224 "get_zone_info": false, 00:10:36.224 "zone_management": false, 00:10:36.224 "zone_append": false, 00:10:36.224 "compare": false, 00:10:36.224 "compare_and_write": false, 00:10:36.224 "abort": true, 00:10:36.224 "seek_hole": false, 00:10:36.224 "seek_data": false, 00:10:36.224 
"copy": true, 00:10:36.224 "nvme_iov_md": false 00:10:36.224 }, 00:10:36.224 "memory_domains": [ 00:10:36.224 { 00:10:36.224 "dma_device_id": "system", 00:10:36.224 "dma_device_type": 1 00:10:36.224 }, 00:10:36.224 { 00:10:36.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.224 "dma_device_type": 2 00:10:36.224 } 00:10:36.224 ], 00:10:36.224 "driver_specific": {} 00:10:36.224 } 00:10:36.224 ] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.224 [2024-11-26 18:57:27.431319] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.224 [2024-11-26 18:57:27.431508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.224 [2024-11-26 18:57:27.431652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.224 [2024-11-26 18:57:27.434198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.224 [2024-11-26 18:57:27.434439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.224 18:57:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.224 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.225 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.225 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.225 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.225 "name": "Existed_Raid", 00:10:36.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.225 "strip_size_kb": 64, 00:10:36.225 "state": "configuring", 00:10:36.225 
"raid_level": "raid0", 00:10:36.225 "superblock": false, 00:10:36.225 "num_base_bdevs": 4, 00:10:36.225 "num_base_bdevs_discovered": 3, 00:10:36.225 "num_base_bdevs_operational": 4, 00:10:36.225 "base_bdevs_list": [ 00:10:36.225 { 00:10:36.225 "name": "BaseBdev1", 00:10:36.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.225 "is_configured": false, 00:10:36.225 "data_offset": 0, 00:10:36.225 "data_size": 0 00:10:36.225 }, 00:10:36.225 { 00:10:36.225 "name": "BaseBdev2", 00:10:36.225 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:36.225 "is_configured": true, 00:10:36.225 "data_offset": 0, 00:10:36.225 "data_size": 65536 00:10:36.225 }, 00:10:36.225 { 00:10:36.225 "name": "BaseBdev3", 00:10:36.225 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:36.225 "is_configured": true, 00:10:36.225 "data_offset": 0, 00:10:36.225 "data_size": 65536 00:10:36.225 }, 00:10:36.225 { 00:10:36.225 "name": "BaseBdev4", 00:10:36.225 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:36.225 "is_configured": true, 00:10:36.225 "data_offset": 0, 00:10:36.225 "data_size": 65536 00:10:36.225 } 00:10:36.225 ] 00:10:36.225 }' 00:10:36.225 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.225 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.792 [2024-11-26 18:57:27.983495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.792 18:57:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.792 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.792 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.792 "name": "Existed_Raid", 00:10:36.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.792 "strip_size_kb": 64, 00:10:36.792 "state": "configuring", 00:10:36.792 "raid_level": "raid0", 00:10:36.792 "superblock": false, 00:10:36.792 
"num_base_bdevs": 4, 00:10:36.792 "num_base_bdevs_discovered": 2, 00:10:36.792 "num_base_bdevs_operational": 4, 00:10:36.792 "base_bdevs_list": [ 00:10:36.792 { 00:10:36.792 "name": "BaseBdev1", 00:10:36.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.792 "is_configured": false, 00:10:36.792 "data_offset": 0, 00:10:36.792 "data_size": 0 00:10:36.792 }, 00:10:36.792 { 00:10:36.792 "name": null, 00:10:36.792 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:36.792 "is_configured": false, 00:10:36.792 "data_offset": 0, 00:10:36.792 "data_size": 65536 00:10:36.792 }, 00:10:36.792 { 00:10:36.792 "name": "BaseBdev3", 00:10:36.792 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:36.792 "is_configured": true, 00:10:36.792 "data_offset": 0, 00:10:36.792 "data_size": 65536 00:10:36.792 }, 00:10:36.792 { 00:10:36.792 "name": "BaseBdev4", 00:10:36.792 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:36.792 "is_configured": true, 00:10:36.792 "data_offset": 0, 00:10:36.792 "data_size": 65536 00:10:36.792 } 00:10:36.792 ] 00:10:36.792 }' 00:10:36.792 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.792 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:37.360 18:57:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.360 [2024-11-26 18:57:28.558249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.360 BaseBdev1 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.360 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.361 [ 00:10:37.361 { 00:10:37.361 "name": "BaseBdev1", 00:10:37.361 "aliases": [ 00:10:37.361 "4280695e-2443-47a0-b1b8-3f9202ff76e4" 00:10:37.361 ], 00:10:37.361 "product_name": "Malloc disk", 00:10:37.361 "block_size": 512, 00:10:37.361 "num_blocks": 65536, 00:10:37.361 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:37.361 "assigned_rate_limits": { 00:10:37.361 "rw_ios_per_sec": 0, 00:10:37.361 "rw_mbytes_per_sec": 0, 00:10:37.361 "r_mbytes_per_sec": 0, 00:10:37.361 "w_mbytes_per_sec": 0 00:10:37.361 }, 00:10:37.361 "claimed": true, 00:10:37.361 "claim_type": "exclusive_write", 00:10:37.361 "zoned": false, 00:10:37.361 "supported_io_types": { 00:10:37.361 "read": true, 00:10:37.361 "write": true, 00:10:37.361 "unmap": true, 00:10:37.361 "flush": true, 00:10:37.361 "reset": true, 00:10:37.361 "nvme_admin": false, 00:10:37.361 "nvme_io": false, 00:10:37.361 "nvme_io_md": false, 00:10:37.361 "write_zeroes": true, 00:10:37.361 "zcopy": true, 00:10:37.361 "get_zone_info": false, 00:10:37.361 "zone_management": false, 00:10:37.361 "zone_append": false, 00:10:37.361 "compare": false, 00:10:37.361 "compare_and_write": false, 00:10:37.361 "abort": true, 00:10:37.361 "seek_hole": false, 00:10:37.361 "seek_data": false, 00:10:37.361 "copy": true, 00:10:37.361 "nvme_iov_md": false 00:10:37.361 }, 00:10:37.361 "memory_domains": [ 00:10:37.361 { 00:10:37.361 "dma_device_id": "system", 00:10:37.361 "dma_device_type": 1 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.361 "dma_device_type": 2 00:10:37.361 } 00:10:37.361 ], 00:10:37.361 "driver_specific": {} 00:10:37.361 } 00:10:37.361 ] 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.361 "name": "Existed_Raid", 00:10:37.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.361 "strip_size_kb": 64, 00:10:37.361 "state": "configuring", 00:10:37.361 "raid_level": "raid0", 00:10:37.361 "superblock": false, 
00:10:37.361 "num_base_bdevs": 4, 00:10:37.361 "num_base_bdevs_discovered": 3, 00:10:37.361 "num_base_bdevs_operational": 4, 00:10:37.361 "base_bdevs_list": [ 00:10:37.361 { 00:10:37.361 "name": "BaseBdev1", 00:10:37.361 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:37.361 "is_configured": true, 00:10:37.361 "data_offset": 0, 00:10:37.361 "data_size": 65536 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "name": null, 00:10:37.361 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:37.361 "is_configured": false, 00:10:37.361 "data_offset": 0, 00:10:37.361 "data_size": 65536 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "name": "BaseBdev3", 00:10:37.361 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:37.361 "is_configured": true, 00:10:37.361 "data_offset": 0, 00:10:37.361 "data_size": 65536 00:10:37.361 }, 00:10:37.361 { 00:10:37.361 "name": "BaseBdev4", 00:10:37.361 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:37.361 "is_configured": true, 00:10:37.361 "data_offset": 0, 00:10:37.361 "data_size": 65536 00:10:37.361 } 00:10:37.361 ] 00:10:37.361 }' 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.361 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:37.929 18:57:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.929 [2024-11-26 18:57:29.170582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.929 "name": "Existed_Raid", 00:10:37.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.929 "strip_size_kb": 64, 00:10:37.929 "state": "configuring", 00:10:37.929 "raid_level": "raid0", 00:10:37.929 "superblock": false, 00:10:37.929 "num_base_bdevs": 4, 00:10:37.929 "num_base_bdevs_discovered": 2, 00:10:37.929 "num_base_bdevs_operational": 4, 00:10:37.929 "base_bdevs_list": [ 00:10:37.929 { 00:10:37.929 "name": "BaseBdev1", 00:10:37.929 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:37.929 "is_configured": true, 00:10:37.929 "data_offset": 0, 00:10:37.929 "data_size": 65536 00:10:37.929 }, 00:10:37.929 { 00:10:37.929 "name": null, 00:10:37.929 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:37.929 "is_configured": false, 00:10:37.929 "data_offset": 0, 00:10:37.929 "data_size": 65536 00:10:37.929 }, 00:10:37.929 { 00:10:37.929 "name": null, 00:10:37.929 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:37.929 "is_configured": false, 00:10:37.929 "data_offset": 0, 00:10:37.929 "data_size": 65536 00:10:37.929 }, 00:10:37.929 { 00:10:37.929 "name": "BaseBdev4", 00:10:37.929 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:37.929 "is_configured": true, 00:10:37.929 "data_offset": 0, 00:10:37.929 "data_size": 65536 00:10:37.929 } 00:10:37.929 ] 00:10:37.929 }' 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.929 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.496 [2024-11-26 18:57:29.758731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.496 "name": "Existed_Raid", 00:10:38.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.496 "strip_size_kb": 64, 00:10:38.496 "state": "configuring", 00:10:38.496 "raid_level": "raid0", 00:10:38.496 "superblock": false, 00:10:38.496 "num_base_bdevs": 4, 00:10:38.496 "num_base_bdevs_discovered": 3, 00:10:38.496 "num_base_bdevs_operational": 4, 00:10:38.496 "base_bdevs_list": [ 00:10:38.496 { 00:10:38.496 "name": "BaseBdev1", 00:10:38.496 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:38.496 "is_configured": true, 00:10:38.496 "data_offset": 0, 00:10:38.496 "data_size": 65536 00:10:38.496 }, 00:10:38.496 { 00:10:38.496 "name": null, 00:10:38.496 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:38.496 "is_configured": false, 00:10:38.496 "data_offset": 0, 00:10:38.496 "data_size": 65536 00:10:38.496 }, 00:10:38.496 { 00:10:38.496 "name": "BaseBdev3", 00:10:38.496 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:38.496 "is_configured": 
true, 00:10:38.496 "data_offset": 0, 00:10:38.496 "data_size": 65536 00:10:38.496 }, 00:10:38.496 { 00:10:38.496 "name": "BaseBdev4", 00:10:38.496 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:38.496 "is_configured": true, 00:10:38.496 "data_offset": 0, 00:10:38.496 "data_size": 65536 00:10:38.496 } 00:10:38.496 ] 00:10:38.496 }' 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.496 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.068 [2024-11-26 18:57:30.399043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.394 "name": "Existed_Raid", 00:10:39.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.394 "strip_size_kb": 64, 00:10:39.394 "state": "configuring", 00:10:39.394 "raid_level": "raid0", 00:10:39.394 "superblock": false, 00:10:39.394 "num_base_bdevs": 4, 00:10:39.394 "num_base_bdevs_discovered": 2, 00:10:39.394 "num_base_bdevs_operational": 4, 00:10:39.394 
"base_bdevs_list": [ 00:10:39.394 { 00:10:39.394 "name": null, 00:10:39.394 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:39.394 "is_configured": false, 00:10:39.394 "data_offset": 0, 00:10:39.394 "data_size": 65536 00:10:39.394 }, 00:10:39.394 { 00:10:39.394 "name": null, 00:10:39.394 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:39.394 "is_configured": false, 00:10:39.394 "data_offset": 0, 00:10:39.394 "data_size": 65536 00:10:39.394 }, 00:10:39.394 { 00:10:39.394 "name": "BaseBdev3", 00:10:39.394 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:39.394 "is_configured": true, 00:10:39.394 "data_offset": 0, 00:10:39.394 "data_size": 65536 00:10:39.394 }, 00:10:39.394 { 00:10:39.394 "name": "BaseBdev4", 00:10:39.394 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:39.394 "is_configured": true, 00:10:39.394 "data_offset": 0, 00:10:39.394 "data_size": 65536 00:10:39.394 } 00:10:39.394 ] 00:10:39.394 }' 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.394 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.978 18:57:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.978 [2024-11-26 18:57:31.096879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.978 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.979 18:57:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:39.979 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.979 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.979 "name": "Existed_Raid", 00:10:39.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.979 "strip_size_kb": 64, 00:10:39.979 "state": "configuring", 00:10:39.979 "raid_level": "raid0", 00:10:39.979 "superblock": false, 00:10:39.979 "num_base_bdevs": 4, 00:10:39.979 "num_base_bdevs_discovered": 3, 00:10:39.979 "num_base_bdevs_operational": 4, 00:10:39.979 "base_bdevs_list": [ 00:10:39.979 { 00:10:39.979 "name": null, 00:10:39.979 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:39.979 "is_configured": false, 00:10:39.979 "data_offset": 0, 00:10:39.979 "data_size": 65536 00:10:39.979 }, 00:10:39.979 { 00:10:39.979 "name": "BaseBdev2", 00:10:39.979 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:39.979 "is_configured": true, 00:10:39.979 "data_offset": 0, 00:10:39.979 "data_size": 65536 00:10:39.979 }, 00:10:39.979 { 00:10:39.979 "name": "BaseBdev3", 00:10:39.979 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:39.979 "is_configured": true, 00:10:39.979 "data_offset": 0, 00:10:39.979 "data_size": 65536 00:10:39.979 }, 00:10:39.979 { 00:10:39.979 "name": "BaseBdev4", 00:10:39.979 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:39.979 "is_configured": true, 00:10:39.979 "data_offset": 0, 00:10:39.979 "data_size": 65536 00:10:39.979 } 00:10:39.979 ] 00:10:39.979 }' 00:10:39.979 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.979 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.236 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.237 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:40.237 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.237 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4280695e-2443-47a0-b1b8-3f9202ff76e4 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.497 [2024-11-26 18:57:31.716484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.497 [2024-11-26 18:57:31.716816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:40.497 [2024-11-26 18:57:31.716845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:40.497 [2024-11-26 18:57:31.717232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:40.497 [2024-11-26 18:57:31.717471] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.497 [2024-11-26 18:57:31.717490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:40.497 [2024-11-26 18:57:31.717813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.497 NewBaseBdev 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.497 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.497 [ 00:10:40.497 { 
00:10:40.497 "name": "NewBaseBdev", 00:10:40.497 "aliases": [ 00:10:40.497 "4280695e-2443-47a0-b1b8-3f9202ff76e4" 00:10:40.497 ], 00:10:40.497 "product_name": "Malloc disk", 00:10:40.497 "block_size": 512, 00:10:40.497 "num_blocks": 65536, 00:10:40.497 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:40.497 "assigned_rate_limits": { 00:10:40.497 "rw_ios_per_sec": 0, 00:10:40.497 "rw_mbytes_per_sec": 0, 00:10:40.497 "r_mbytes_per_sec": 0, 00:10:40.497 "w_mbytes_per_sec": 0 00:10:40.497 }, 00:10:40.497 "claimed": true, 00:10:40.497 "claim_type": "exclusive_write", 00:10:40.497 "zoned": false, 00:10:40.497 "supported_io_types": { 00:10:40.497 "read": true, 00:10:40.497 "write": true, 00:10:40.497 "unmap": true, 00:10:40.497 "flush": true, 00:10:40.497 "reset": true, 00:10:40.497 "nvme_admin": false, 00:10:40.497 "nvme_io": false, 00:10:40.497 "nvme_io_md": false, 00:10:40.497 "write_zeroes": true, 00:10:40.497 "zcopy": true, 00:10:40.497 "get_zone_info": false, 00:10:40.497 "zone_management": false, 00:10:40.497 "zone_append": false, 00:10:40.497 "compare": false, 00:10:40.497 "compare_and_write": false, 00:10:40.497 "abort": true, 00:10:40.497 "seek_hole": false, 00:10:40.497 "seek_data": false, 00:10:40.498 "copy": true, 00:10:40.498 "nvme_iov_md": false 00:10:40.498 }, 00:10:40.498 "memory_domains": [ 00:10:40.498 { 00:10:40.498 "dma_device_id": "system", 00:10:40.498 "dma_device_type": 1 00:10:40.498 }, 00:10:40.498 { 00:10:40.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.498 "dma_device_type": 2 00:10:40.498 } 00:10:40.498 ], 00:10:40.498 "driver_specific": {} 00:10:40.498 } 00:10:40.498 ] 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:40.498 
18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.498 "name": "Existed_Raid", 00:10:40.498 "uuid": "4a727ff5-8a60-48ea-b5e2-239be7d68a80", 00:10:40.498 "strip_size_kb": 64, 00:10:40.498 "state": "online", 00:10:40.498 "raid_level": "raid0", 00:10:40.498 "superblock": false, 00:10:40.498 "num_base_bdevs": 4, 00:10:40.498 "num_base_bdevs_discovered": 4, 00:10:40.498 
"num_base_bdevs_operational": 4, 00:10:40.498 "base_bdevs_list": [ 00:10:40.498 { 00:10:40.498 "name": "NewBaseBdev", 00:10:40.498 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:40.498 "is_configured": true, 00:10:40.498 "data_offset": 0, 00:10:40.498 "data_size": 65536 00:10:40.498 }, 00:10:40.498 { 00:10:40.498 "name": "BaseBdev2", 00:10:40.498 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:40.498 "is_configured": true, 00:10:40.498 "data_offset": 0, 00:10:40.498 "data_size": 65536 00:10:40.498 }, 00:10:40.498 { 00:10:40.498 "name": "BaseBdev3", 00:10:40.498 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:40.498 "is_configured": true, 00:10:40.498 "data_offset": 0, 00:10:40.498 "data_size": 65536 00:10:40.498 }, 00:10:40.498 { 00:10:40.498 "name": "BaseBdev4", 00:10:40.498 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:40.498 "is_configured": true, 00:10:40.498 "data_offset": 0, 00:10:40.498 "data_size": 65536 00:10:40.498 } 00:10:40.498 ] 00:10:40.498 }' 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.498 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.066 [2024-11-26 18:57:32.269294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.066 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.066 "name": "Existed_Raid", 00:10:41.066 "aliases": [ 00:10:41.066 "4a727ff5-8a60-48ea-b5e2-239be7d68a80" 00:10:41.066 ], 00:10:41.066 "product_name": "Raid Volume", 00:10:41.066 "block_size": 512, 00:10:41.066 "num_blocks": 262144, 00:10:41.066 "uuid": "4a727ff5-8a60-48ea-b5e2-239be7d68a80", 00:10:41.066 "assigned_rate_limits": { 00:10:41.066 "rw_ios_per_sec": 0, 00:10:41.066 "rw_mbytes_per_sec": 0, 00:10:41.066 "r_mbytes_per_sec": 0, 00:10:41.066 "w_mbytes_per_sec": 0 00:10:41.066 }, 00:10:41.066 "claimed": false, 00:10:41.066 "zoned": false, 00:10:41.066 "supported_io_types": { 00:10:41.066 "read": true, 00:10:41.066 "write": true, 00:10:41.066 "unmap": true, 00:10:41.066 "flush": true, 00:10:41.066 "reset": true, 00:10:41.066 "nvme_admin": false, 00:10:41.066 "nvme_io": false, 00:10:41.066 "nvme_io_md": false, 00:10:41.066 "write_zeroes": true, 00:10:41.066 "zcopy": false, 00:10:41.066 "get_zone_info": false, 00:10:41.066 "zone_management": false, 00:10:41.066 "zone_append": false, 00:10:41.066 "compare": false, 00:10:41.066 "compare_and_write": false, 00:10:41.066 "abort": false, 00:10:41.066 "seek_hole": false, 00:10:41.066 "seek_data": false, 00:10:41.066 "copy": false, 00:10:41.066 "nvme_iov_md": false 00:10:41.066 }, 00:10:41.066 "memory_domains": [ 00:10:41.066 { 00:10:41.066 "dma_device_id": "system", 
00:10:41.066 "dma_device_type": 1 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.066 "dma_device_type": 2 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "system", 00:10:41.066 "dma_device_type": 1 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.066 "dma_device_type": 2 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "system", 00:10:41.066 "dma_device_type": 1 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.066 "dma_device_type": 2 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "system", 00:10:41.066 "dma_device_type": 1 00:10:41.066 }, 00:10:41.066 { 00:10:41.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.066 "dma_device_type": 2 00:10:41.066 } 00:10:41.066 ], 00:10:41.066 "driver_specific": { 00:10:41.066 "raid": { 00:10:41.066 "uuid": "4a727ff5-8a60-48ea-b5e2-239be7d68a80", 00:10:41.066 "strip_size_kb": 64, 00:10:41.066 "state": "online", 00:10:41.066 "raid_level": "raid0", 00:10:41.066 "superblock": false, 00:10:41.066 "num_base_bdevs": 4, 00:10:41.066 "num_base_bdevs_discovered": 4, 00:10:41.066 "num_base_bdevs_operational": 4, 00:10:41.066 "base_bdevs_list": [ 00:10:41.066 { 00:10:41.066 "name": "NewBaseBdev", 00:10:41.066 "uuid": "4280695e-2443-47a0-b1b8-3f9202ff76e4", 00:10:41.067 "is_configured": true, 00:10:41.067 "data_offset": 0, 00:10:41.067 "data_size": 65536 00:10:41.067 }, 00:10:41.067 { 00:10:41.067 "name": "BaseBdev2", 00:10:41.067 "uuid": "508626a2-2afb-4f6b-91a0-ec28a28b729a", 00:10:41.067 "is_configured": true, 00:10:41.067 "data_offset": 0, 00:10:41.067 "data_size": 65536 00:10:41.067 }, 00:10:41.067 { 00:10:41.067 "name": "BaseBdev3", 00:10:41.067 "uuid": "a731ef13-1d70-4458-8d11-a981ea68642c", 00:10:41.067 "is_configured": true, 00:10:41.067 "data_offset": 0, 00:10:41.067 "data_size": 65536 00:10:41.067 }, 00:10:41.067 { 00:10:41.067 "name": "BaseBdev4", 
00:10:41.067 "uuid": "8170853d-55d0-48e4-bfa5-b7a48095e651", 00:10:41.067 "is_configured": true, 00:10:41.067 "data_offset": 0, 00:10:41.067 "data_size": 65536 00:10:41.067 } 00:10:41.067 ] 00:10:41.067 } 00:10:41.067 } 00:10:41.067 }' 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.067 BaseBdev2 00:10:41.067 BaseBdev3 00:10:41.067 BaseBdev4' 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.067 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.326 18:57:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.326 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.326 [2024-11-26 18:57:32.640824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.326 [2024-11-26 18:57:32.640860] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.327 [2024-11-26 18:57:32.641009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.327 [2024-11-26 18:57:32.641106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.327 [2024-11-26 18:57:32.641123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69497 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69497 ']' 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69497 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69497 00:10:41.327 killing process with pid 69497 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69497' 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69497 00:10:41.327 [2024-11-26 18:57:32.681363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.327 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69497 00:10:41.895 [2024-11-26 18:57:33.041076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.831 18:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.831 00:10:42.831 real 0m12.988s 00:10:42.831 user 0m21.502s 00:10:42.831 sys 0m1.788s 00:10:42.831 18:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.831 ************************************ 00:10:42.831 END TEST raid_state_function_test 00:10:42.831 ************************************ 00:10:42.831 18:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.091 18:57:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:10:43.091 18:57:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.091 18:57:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.091 18:57:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.091 ************************************ 00:10:43.091 START TEST raid_state_function_test_sb 00:10:43.091 ************************************ 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:43.091 18:57:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:43.091 Process raid pid: 70181 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70181 00:10:43.091 18:57:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70181' 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70181 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70181 ']' 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.091 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.091 [2024-11-26 18:57:34.338865] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:10:43.091 [2024-11-26 18:57:34.339312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.350 [2024-11-26 18:57:34.526053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.351 [2024-11-26 18:57:34.664179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.610 [2024-11-26 18:57:34.882909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.610 [2024-11-26 18:57:34.882970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.177 [2024-11-26 18:57:35.369953] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.177 [2024-11-26 18:57:35.370178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.177 [2024-11-26 18:57:35.370368] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.177 [2024-11-26 18:57:35.370538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.177 [2024-11-26 18:57:35.370562] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:44.177 [2024-11-26 18:57:35.370579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.177 [2024-11-26 18:57:35.370589] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.177 [2024-11-26 18:57:35.370603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.177 18:57:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.177 "name": "Existed_Raid", 00:10:44.177 "uuid": "322132c3-7a82-496e-be79-481ea2e97faa", 00:10:44.177 "strip_size_kb": 64, 00:10:44.177 "state": "configuring", 00:10:44.177 "raid_level": "raid0", 00:10:44.177 "superblock": true, 00:10:44.177 "num_base_bdevs": 4, 00:10:44.177 "num_base_bdevs_discovered": 0, 00:10:44.177 "num_base_bdevs_operational": 4, 00:10:44.177 "base_bdevs_list": [ 00:10:44.177 { 00:10:44.177 "name": "BaseBdev1", 00:10:44.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.177 "is_configured": false, 00:10:44.177 "data_offset": 0, 00:10:44.177 "data_size": 0 00:10:44.177 }, 00:10:44.177 { 00:10:44.177 "name": "BaseBdev2", 00:10:44.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.177 "is_configured": false, 00:10:44.177 "data_offset": 0, 00:10:44.177 "data_size": 0 00:10:44.177 }, 00:10:44.177 { 00:10:44.177 "name": "BaseBdev3", 00:10:44.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.177 "is_configured": false, 00:10:44.177 "data_offset": 0, 00:10:44.177 "data_size": 0 00:10:44.177 }, 00:10:44.177 { 00:10:44.177 "name": "BaseBdev4", 00:10:44.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.177 "is_configured": false, 00:10:44.177 "data_offset": 0, 00:10:44.177 "data_size": 0 00:10:44.177 } 00:10:44.177 ] 00:10:44.177 }' 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.177 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 18:57:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 [2024-11-26 18:57:35.897956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.745 [2024-11-26 18:57:35.898140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 [2024-11-26 18:57:35.905956] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.745 [2024-11-26 18:57:35.906133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.745 [2024-11-26 18:57:35.906260] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.745 [2024-11-26 18:57:35.906321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.745 [2024-11-26 18:57:35.906428] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.745 [2024-11-26 18:57:35.906574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.745 [2024-11-26 18:57:35.906704] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:44.745 [2024-11-26 18:57:35.906772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 [2024-11-26 18:57:35.951081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.745 BaseBdev1 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 [ 00:10:44.745 { 00:10:44.745 "name": "BaseBdev1", 00:10:44.745 "aliases": [ 00:10:44.745 "646e647b-80f0-4788-8579-15e8bb7afb97" 00:10:44.745 ], 00:10:44.745 "product_name": "Malloc disk", 00:10:44.745 "block_size": 512, 00:10:44.745 "num_blocks": 65536, 00:10:44.745 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:44.745 "assigned_rate_limits": { 00:10:44.745 "rw_ios_per_sec": 0, 00:10:44.745 "rw_mbytes_per_sec": 0, 00:10:44.745 "r_mbytes_per_sec": 0, 00:10:44.745 "w_mbytes_per_sec": 0 00:10:44.745 }, 00:10:44.745 "claimed": true, 00:10:44.745 "claim_type": "exclusive_write", 00:10:44.745 "zoned": false, 00:10:44.745 "supported_io_types": { 00:10:44.745 "read": true, 00:10:44.745 "write": true, 00:10:44.745 "unmap": true, 00:10:44.745 "flush": true, 00:10:44.745 "reset": true, 00:10:44.745 "nvme_admin": false, 00:10:44.745 "nvme_io": false, 00:10:44.745 "nvme_io_md": false, 00:10:44.745 "write_zeroes": true, 00:10:44.745 "zcopy": true, 00:10:44.745 "get_zone_info": false, 00:10:44.745 "zone_management": false, 00:10:44.745 "zone_append": false, 00:10:44.745 "compare": false, 00:10:44.745 "compare_and_write": false, 00:10:44.745 "abort": true, 00:10:44.745 "seek_hole": false, 00:10:44.745 "seek_data": false, 00:10:44.745 "copy": true, 00:10:44.745 "nvme_iov_md": false 00:10:44.745 }, 00:10:44.745 "memory_domains": [ 00:10:44.745 { 00:10:44.745 "dma_device_id": "system", 00:10:44.745 "dma_device_type": 1 00:10:44.745 }, 00:10:44.745 { 00:10:44.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.745 "dma_device_type": 2 00:10:44.745 } 
00:10:44.745 ], 00:10:44.745 "driver_specific": {} 00:10:44.745 } 00:10:44.745 ] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.745 18:57:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.745 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.745 "name": "Existed_Raid", 00:10:44.745 "uuid": "732ac42a-0bfc-425e-8c93-2060346301f7", 00:10:44.745 "strip_size_kb": 64, 00:10:44.745 "state": "configuring", 00:10:44.745 "raid_level": "raid0", 00:10:44.745 "superblock": true, 00:10:44.745 "num_base_bdevs": 4, 00:10:44.745 "num_base_bdevs_discovered": 1, 00:10:44.745 "num_base_bdevs_operational": 4, 00:10:44.745 "base_bdevs_list": [ 00:10:44.745 { 00:10:44.745 "name": "BaseBdev1", 00:10:44.745 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:44.745 "is_configured": true, 00:10:44.745 "data_offset": 2048, 00:10:44.745 "data_size": 63488 00:10:44.745 }, 00:10:44.745 { 00:10:44.745 "name": "BaseBdev2", 00:10:44.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.745 "is_configured": false, 00:10:44.745 "data_offset": 0, 00:10:44.745 "data_size": 0 00:10:44.745 }, 00:10:44.745 { 00:10:44.745 "name": "BaseBdev3", 00:10:44.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.745 "is_configured": false, 00:10:44.745 "data_offset": 0, 00:10:44.745 "data_size": 0 00:10:44.745 }, 00:10:44.745 { 00:10:44.745 "name": "BaseBdev4", 00:10:44.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.745 "is_configured": false, 00:10:44.745 "data_offset": 0, 00:10:44.745 "data_size": 0 00:10:44.745 } 00:10:44.745 ] 00:10:44.745 }' 00:10:44.745 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.745 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.314 18:57:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 [2024-11-26 18:57:36.471313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.314 [2024-11-26 18:57:36.471518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 [2024-11-26 18:57:36.479350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.314 [2024-11-26 18:57:36.481959] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.314 [2024-11-26 18:57:36.482139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.314 [2024-11-26 18:57:36.482264] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.314 [2024-11-26 18:57:36.482329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.314 [2024-11-26 18:57:36.482556] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.314 [2024-11-26 18:57:36.482618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:45.314 "name": "Existed_Raid", 00:10:45.314 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:45.314 "strip_size_kb": 64, 00:10:45.314 "state": "configuring", 00:10:45.314 "raid_level": "raid0", 00:10:45.314 "superblock": true, 00:10:45.314 "num_base_bdevs": 4, 00:10:45.314 "num_base_bdevs_discovered": 1, 00:10:45.314 "num_base_bdevs_operational": 4, 00:10:45.314 "base_bdevs_list": [ 00:10:45.314 { 00:10:45.314 "name": "BaseBdev1", 00:10:45.314 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:45.314 "is_configured": true, 00:10:45.314 "data_offset": 2048, 00:10:45.314 "data_size": 63488 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "name": "BaseBdev2", 00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "is_configured": false, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 0 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "name": "BaseBdev3", 00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "is_configured": false, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 0 00:10:45.314 }, 00:10:45.314 { 00:10:45.314 "name": "BaseBdev4", 00:10:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.314 "is_configured": false, 00:10:45.314 "data_offset": 0, 00:10:45.314 "data_size": 0 00:10:45.314 } 00:10:45.314 ] 00:10:45.314 }' 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.314 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.881 [2024-11-26 18:57:37.046737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:45.881 BaseBdev2 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.881 [ 00:10:45.881 { 00:10:45.881 "name": "BaseBdev2", 00:10:45.881 "aliases": [ 00:10:45.881 "2f06d0ce-1b4f-4055-929d-f372fde9e47d" 00:10:45.881 ], 00:10:45.881 "product_name": "Malloc disk", 00:10:45.881 "block_size": 512, 00:10:45.881 "num_blocks": 65536, 00:10:45.881 "uuid": "2f06d0ce-1b4f-4055-929d-f372fde9e47d", 
00:10:45.881 "assigned_rate_limits": { 00:10:45.881 "rw_ios_per_sec": 0, 00:10:45.881 "rw_mbytes_per_sec": 0, 00:10:45.881 "r_mbytes_per_sec": 0, 00:10:45.881 "w_mbytes_per_sec": 0 00:10:45.881 }, 00:10:45.881 "claimed": true, 00:10:45.881 "claim_type": "exclusive_write", 00:10:45.881 "zoned": false, 00:10:45.881 "supported_io_types": { 00:10:45.881 "read": true, 00:10:45.881 "write": true, 00:10:45.881 "unmap": true, 00:10:45.881 "flush": true, 00:10:45.881 "reset": true, 00:10:45.881 "nvme_admin": false, 00:10:45.881 "nvme_io": false, 00:10:45.881 "nvme_io_md": false, 00:10:45.881 "write_zeroes": true, 00:10:45.881 "zcopy": true, 00:10:45.881 "get_zone_info": false, 00:10:45.881 "zone_management": false, 00:10:45.881 "zone_append": false, 00:10:45.881 "compare": false, 00:10:45.881 "compare_and_write": false, 00:10:45.881 "abort": true, 00:10:45.881 "seek_hole": false, 00:10:45.881 "seek_data": false, 00:10:45.881 "copy": true, 00:10:45.881 "nvme_iov_md": false 00:10:45.881 }, 00:10:45.881 "memory_domains": [ 00:10:45.881 { 00:10:45.881 "dma_device_id": "system", 00:10:45.881 "dma_device_type": 1 00:10:45.881 }, 00:10:45.881 { 00:10:45.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.881 "dma_device_type": 2 00:10:45.881 } 00:10:45.881 ], 00:10:45.881 "driver_specific": {} 00:10:45.881 } 00:10:45.881 ] 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.881 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.882 "name": "Existed_Raid", 00:10:45.882 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:45.882 "strip_size_kb": 64, 00:10:45.882 "state": "configuring", 00:10:45.882 "raid_level": "raid0", 00:10:45.882 "superblock": true, 00:10:45.882 "num_base_bdevs": 4, 00:10:45.882 "num_base_bdevs_discovered": 2, 00:10:45.882 
"num_base_bdevs_operational": 4, 00:10:45.882 "base_bdevs_list": [ 00:10:45.882 { 00:10:45.882 "name": "BaseBdev1", 00:10:45.882 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:45.882 "is_configured": true, 00:10:45.882 "data_offset": 2048, 00:10:45.882 "data_size": 63488 00:10:45.882 }, 00:10:45.882 { 00:10:45.882 "name": "BaseBdev2", 00:10:45.882 "uuid": "2f06d0ce-1b4f-4055-929d-f372fde9e47d", 00:10:45.882 "is_configured": true, 00:10:45.882 "data_offset": 2048, 00:10:45.882 "data_size": 63488 00:10:45.882 }, 00:10:45.882 { 00:10:45.882 "name": "BaseBdev3", 00:10:45.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.882 "is_configured": false, 00:10:45.882 "data_offset": 0, 00:10:45.882 "data_size": 0 00:10:45.882 }, 00:10:45.882 { 00:10:45.882 "name": "BaseBdev4", 00:10:45.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.882 "is_configured": false, 00:10:45.882 "data_offset": 0, 00:10:45.882 "data_size": 0 00:10:45.882 } 00:10:45.882 ] 00:10:45.882 }' 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.882 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.447 [2024-11-26 18:57:37.663209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.447 BaseBdev3 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:46.447 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.448 [ 00:10:46.448 { 00:10:46.448 "name": "BaseBdev3", 00:10:46.448 "aliases": [ 00:10:46.448 "558da00e-fed0-4947-b2f8-ca3e87037809" 00:10:46.448 ], 00:10:46.448 "product_name": "Malloc disk", 00:10:46.448 "block_size": 512, 00:10:46.448 "num_blocks": 65536, 00:10:46.448 "uuid": "558da00e-fed0-4947-b2f8-ca3e87037809", 00:10:46.448 "assigned_rate_limits": { 00:10:46.448 "rw_ios_per_sec": 0, 00:10:46.448 "rw_mbytes_per_sec": 0, 00:10:46.448 "r_mbytes_per_sec": 0, 00:10:46.448 "w_mbytes_per_sec": 0 00:10:46.448 }, 00:10:46.448 "claimed": true, 00:10:46.448 "claim_type": "exclusive_write", 00:10:46.448 "zoned": false, 00:10:46.448 "supported_io_types": { 
00:10:46.448 "read": true, 00:10:46.448 "write": true, 00:10:46.448 "unmap": true, 00:10:46.448 "flush": true, 00:10:46.448 "reset": true, 00:10:46.448 "nvme_admin": false, 00:10:46.448 "nvme_io": false, 00:10:46.448 "nvme_io_md": false, 00:10:46.448 "write_zeroes": true, 00:10:46.448 "zcopy": true, 00:10:46.448 "get_zone_info": false, 00:10:46.448 "zone_management": false, 00:10:46.448 "zone_append": false, 00:10:46.448 "compare": false, 00:10:46.448 "compare_and_write": false, 00:10:46.448 "abort": true, 00:10:46.448 "seek_hole": false, 00:10:46.448 "seek_data": false, 00:10:46.448 "copy": true, 00:10:46.448 "nvme_iov_md": false 00:10:46.448 }, 00:10:46.448 "memory_domains": [ 00:10:46.448 { 00:10:46.448 "dma_device_id": "system", 00:10:46.448 "dma_device_type": 1 00:10:46.448 }, 00:10:46.448 { 00:10:46.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.448 "dma_device_type": 2 00:10:46.448 } 00:10:46.448 ], 00:10:46.448 "driver_specific": {} 00:10:46.448 } 00:10:46.448 ] 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.448 "name": "Existed_Raid", 00:10:46.448 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:46.448 "strip_size_kb": 64, 00:10:46.448 "state": "configuring", 00:10:46.448 "raid_level": "raid0", 00:10:46.448 "superblock": true, 00:10:46.448 "num_base_bdevs": 4, 00:10:46.448 "num_base_bdevs_discovered": 3, 00:10:46.448 "num_base_bdevs_operational": 4, 00:10:46.448 "base_bdevs_list": [ 00:10:46.448 { 00:10:46.448 "name": "BaseBdev1", 00:10:46.448 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:46.448 "is_configured": true, 00:10:46.448 "data_offset": 2048, 00:10:46.448 "data_size": 63488 00:10:46.448 }, 00:10:46.448 { 00:10:46.448 "name": "BaseBdev2", 00:10:46.448 
"uuid": "2f06d0ce-1b4f-4055-929d-f372fde9e47d", 00:10:46.448 "is_configured": true, 00:10:46.448 "data_offset": 2048, 00:10:46.448 "data_size": 63488 00:10:46.448 }, 00:10:46.448 { 00:10:46.448 "name": "BaseBdev3", 00:10:46.448 "uuid": "558da00e-fed0-4947-b2f8-ca3e87037809", 00:10:46.448 "is_configured": true, 00:10:46.448 "data_offset": 2048, 00:10:46.448 "data_size": 63488 00:10:46.448 }, 00:10:46.448 { 00:10:46.448 "name": "BaseBdev4", 00:10:46.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.448 "is_configured": false, 00:10:46.448 "data_offset": 0, 00:10:46.448 "data_size": 0 00:10:46.448 } 00:10:46.448 ] 00:10:46.448 }' 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.448 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.021 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.021 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.021 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.021 [2024-11-26 18:57:38.267898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.021 [2024-11-26 18:57:38.268302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.021 [2024-11-26 18:57:38.268322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.021 BaseBdev4 00:10:47.021 [2024-11-26 18:57:38.268699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.021 [2024-11-26 18:57:38.268887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.021 [2024-11-26 18:57:38.268907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:47.021 [2024-11-26 18:57:38.269144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.021 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.022 [ 00:10:47.022 { 00:10:47.022 "name": "BaseBdev4", 00:10:47.022 "aliases": [ 00:10:47.022 "10c76535-fff5-4413-bf89-45bdfc29f511" 00:10:47.022 ], 00:10:47.022 "product_name": "Malloc disk", 00:10:47.022 "block_size": 512, 00:10:47.022 
"num_blocks": 65536, 00:10:47.022 "uuid": "10c76535-fff5-4413-bf89-45bdfc29f511", 00:10:47.022 "assigned_rate_limits": { 00:10:47.022 "rw_ios_per_sec": 0, 00:10:47.022 "rw_mbytes_per_sec": 0, 00:10:47.022 "r_mbytes_per_sec": 0, 00:10:47.022 "w_mbytes_per_sec": 0 00:10:47.022 }, 00:10:47.022 "claimed": true, 00:10:47.022 "claim_type": "exclusive_write", 00:10:47.022 "zoned": false, 00:10:47.022 "supported_io_types": { 00:10:47.022 "read": true, 00:10:47.022 "write": true, 00:10:47.022 "unmap": true, 00:10:47.022 "flush": true, 00:10:47.022 "reset": true, 00:10:47.022 "nvme_admin": false, 00:10:47.022 "nvme_io": false, 00:10:47.022 "nvme_io_md": false, 00:10:47.022 "write_zeroes": true, 00:10:47.022 "zcopy": true, 00:10:47.022 "get_zone_info": false, 00:10:47.022 "zone_management": false, 00:10:47.022 "zone_append": false, 00:10:47.022 "compare": false, 00:10:47.022 "compare_and_write": false, 00:10:47.022 "abort": true, 00:10:47.022 "seek_hole": false, 00:10:47.022 "seek_data": false, 00:10:47.022 "copy": true, 00:10:47.022 "nvme_iov_md": false 00:10:47.022 }, 00:10:47.022 "memory_domains": [ 00:10:47.022 { 00:10:47.022 "dma_device_id": "system", 00:10:47.022 "dma_device_type": 1 00:10:47.022 }, 00:10:47.022 { 00:10:47.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.022 "dma_device_type": 2 00:10:47.022 } 00:10:47.022 ], 00:10:47.022 "driver_specific": {} 00:10:47.022 } 00:10:47.022 ] 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.022 "name": "Existed_Raid", 00:10:47.022 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:47.022 "strip_size_kb": 64, 00:10:47.022 "state": "online", 00:10:47.022 "raid_level": "raid0", 00:10:47.022 "superblock": true, 00:10:47.022 "num_base_bdevs": 4, 
00:10:47.022 "num_base_bdevs_discovered": 4, 00:10:47.022 "num_base_bdevs_operational": 4, 00:10:47.022 "base_bdevs_list": [ 00:10:47.022 { 00:10:47.022 "name": "BaseBdev1", 00:10:47.022 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:47.022 "is_configured": true, 00:10:47.022 "data_offset": 2048, 00:10:47.022 "data_size": 63488 00:10:47.022 }, 00:10:47.022 { 00:10:47.022 "name": "BaseBdev2", 00:10:47.022 "uuid": "2f06d0ce-1b4f-4055-929d-f372fde9e47d", 00:10:47.022 "is_configured": true, 00:10:47.022 "data_offset": 2048, 00:10:47.022 "data_size": 63488 00:10:47.022 }, 00:10:47.022 { 00:10:47.022 "name": "BaseBdev3", 00:10:47.022 "uuid": "558da00e-fed0-4947-b2f8-ca3e87037809", 00:10:47.022 "is_configured": true, 00:10:47.022 "data_offset": 2048, 00:10:47.022 "data_size": 63488 00:10:47.022 }, 00:10:47.022 { 00:10:47.022 "name": "BaseBdev4", 00:10:47.022 "uuid": "10c76535-fff5-4413-bf89-45bdfc29f511", 00:10:47.022 "is_configured": true, 00:10:47.022 "data_offset": 2048, 00:10:47.022 "data_size": 63488 00:10:47.022 } 00:10:47.022 ] 00:10:47.022 }' 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.022 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.612 
18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.612 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.613 [2024-11-26 18:57:38.836557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.613 "name": "Existed_Raid", 00:10:47.613 "aliases": [ 00:10:47.613 "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d" 00:10:47.613 ], 00:10:47.613 "product_name": "Raid Volume", 00:10:47.613 "block_size": 512, 00:10:47.613 "num_blocks": 253952, 00:10:47.613 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:47.613 "assigned_rate_limits": { 00:10:47.613 "rw_ios_per_sec": 0, 00:10:47.613 "rw_mbytes_per_sec": 0, 00:10:47.613 "r_mbytes_per_sec": 0, 00:10:47.613 "w_mbytes_per_sec": 0 00:10:47.613 }, 00:10:47.613 "claimed": false, 00:10:47.613 "zoned": false, 00:10:47.613 "supported_io_types": { 00:10:47.613 "read": true, 00:10:47.613 "write": true, 00:10:47.613 "unmap": true, 00:10:47.613 "flush": true, 00:10:47.613 "reset": true, 00:10:47.613 "nvme_admin": false, 00:10:47.613 "nvme_io": false, 00:10:47.613 "nvme_io_md": false, 00:10:47.613 "write_zeroes": true, 00:10:47.613 "zcopy": false, 00:10:47.613 "get_zone_info": false, 00:10:47.613 "zone_management": false, 00:10:47.613 "zone_append": false, 00:10:47.613 "compare": false, 00:10:47.613 "compare_and_write": false, 00:10:47.613 "abort": false, 00:10:47.613 "seek_hole": false, 00:10:47.613 "seek_data": false, 00:10:47.613 "copy": false, 00:10:47.613 
"nvme_iov_md": false 00:10:47.613 }, 00:10:47.613 "memory_domains": [ 00:10:47.613 { 00:10:47.613 "dma_device_id": "system", 00:10:47.613 "dma_device_type": 1 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.613 "dma_device_type": 2 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "system", 00:10:47.613 "dma_device_type": 1 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.613 "dma_device_type": 2 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "system", 00:10:47.613 "dma_device_type": 1 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.613 "dma_device_type": 2 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "system", 00:10:47.613 "dma_device_type": 1 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.613 "dma_device_type": 2 00:10:47.613 } 00:10:47.613 ], 00:10:47.613 "driver_specific": { 00:10:47.613 "raid": { 00:10:47.613 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:47.613 "strip_size_kb": 64, 00:10:47.613 "state": "online", 00:10:47.613 "raid_level": "raid0", 00:10:47.613 "superblock": true, 00:10:47.613 "num_base_bdevs": 4, 00:10:47.613 "num_base_bdevs_discovered": 4, 00:10:47.613 "num_base_bdevs_operational": 4, 00:10:47.613 "base_bdevs_list": [ 00:10:47.613 { 00:10:47.613 "name": "BaseBdev1", 00:10:47.613 "uuid": "646e647b-80f0-4788-8579-15e8bb7afb97", 00:10:47.613 "is_configured": true, 00:10:47.613 "data_offset": 2048, 00:10:47.613 "data_size": 63488 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "name": "BaseBdev2", 00:10:47.613 "uuid": "2f06d0ce-1b4f-4055-929d-f372fde9e47d", 00:10:47.613 "is_configured": true, 00:10:47.613 "data_offset": 2048, 00:10:47.613 "data_size": 63488 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "name": "BaseBdev3", 00:10:47.613 "uuid": "558da00e-fed0-4947-b2f8-ca3e87037809", 00:10:47.613 "is_configured": true, 
00:10:47.613 "data_offset": 2048, 00:10:47.613 "data_size": 63488 00:10:47.613 }, 00:10:47.613 { 00:10:47.613 "name": "BaseBdev4", 00:10:47.613 "uuid": "10c76535-fff5-4413-bf89-45bdfc29f511", 00:10:47.613 "is_configured": true, 00:10:47.613 "data_offset": 2048, 00:10:47.613 "data_size": 63488 00:10:47.613 } 00:10:47.613 ] 00:10:47.613 } 00:10:47.613 } 00:10:47.613 }' 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.613 BaseBdev2 00:10:47.613 BaseBdev3 00:10:47.613 BaseBdev4' 00:10:47.613 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.872 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.872 18:57:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.872 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.872 [2024-11-26 18:57:39.192291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.872 [2024-11-26 18:57:39.192473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.872 [2024-11-26 18:57:39.192650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.129 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.130 "name": "Existed_Raid", 00:10:48.130 "uuid": "2d4b69bc-0324-4ef4-adeb-aa3147ab8e7d", 00:10:48.130 "strip_size_kb": 64, 00:10:48.130 "state": "offline", 00:10:48.130 "raid_level": "raid0", 00:10:48.130 "superblock": true, 00:10:48.130 "num_base_bdevs": 4, 00:10:48.130 "num_base_bdevs_discovered": 3, 00:10:48.130 "num_base_bdevs_operational": 3, 00:10:48.130 "base_bdevs_list": [ 00:10:48.130 { 00:10:48.130 "name": null, 00:10:48.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.130 "is_configured": false, 00:10:48.130 "data_offset": 0, 00:10:48.130 "data_size": 63488 00:10:48.130 }, 00:10:48.130 { 00:10:48.130 "name": "BaseBdev2", 00:10:48.130 "uuid": "2f06d0ce-1b4f-4055-929d-f372fde9e47d", 00:10:48.130 "is_configured": true, 00:10:48.130 "data_offset": 2048, 00:10:48.130 "data_size": 63488 00:10:48.130 }, 00:10:48.130 { 00:10:48.130 "name": "BaseBdev3", 00:10:48.130 "uuid": "558da00e-fed0-4947-b2f8-ca3e87037809", 00:10:48.130 "is_configured": true, 00:10:48.130 "data_offset": 2048, 00:10:48.130 "data_size": 63488 00:10:48.130 }, 00:10:48.130 { 00:10:48.130 "name": "BaseBdev4", 00:10:48.130 "uuid": "10c76535-fff5-4413-bf89-45bdfc29f511", 00:10:48.130 "is_configured": true, 00:10:48.130 "data_offset": 2048, 00:10:48.130 "data_size": 63488 00:10:48.130 } 00:10:48.130 ] 00:10:48.130 }' 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.130 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.696 
18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 [2024-11-26 18:57:39.835404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.696 18:57:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.696 [2024-11-26 18:57:39.984985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:48.955 18:57:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.955 [2024-11-26 18:57:40.133169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:48.955 [2024-11-26 18:57:40.133364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.955 BaseBdev2 00:10:48.955 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.214 [ 00:10:49.214 { 00:10:49.214 "name": "BaseBdev2", 00:10:49.214 "aliases": [ 00:10:49.214 
"a1587a83-c3b3-442d-9e1e-a833e45582d1" 00:10:49.214 ], 00:10:49.214 "product_name": "Malloc disk", 00:10:49.214 "block_size": 512, 00:10:49.214 "num_blocks": 65536, 00:10:49.214 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:49.214 "assigned_rate_limits": { 00:10:49.214 "rw_ios_per_sec": 0, 00:10:49.214 "rw_mbytes_per_sec": 0, 00:10:49.214 "r_mbytes_per_sec": 0, 00:10:49.214 "w_mbytes_per_sec": 0 00:10:49.214 }, 00:10:49.214 "claimed": false, 00:10:49.214 "zoned": false, 00:10:49.214 "supported_io_types": { 00:10:49.214 "read": true, 00:10:49.214 "write": true, 00:10:49.214 "unmap": true, 00:10:49.214 "flush": true, 00:10:49.214 "reset": true, 00:10:49.214 "nvme_admin": false, 00:10:49.214 "nvme_io": false, 00:10:49.214 "nvme_io_md": false, 00:10:49.214 "write_zeroes": true, 00:10:49.214 "zcopy": true, 00:10:49.214 "get_zone_info": false, 00:10:49.214 "zone_management": false, 00:10:49.214 "zone_append": false, 00:10:49.214 "compare": false, 00:10:49.214 "compare_and_write": false, 00:10:49.214 "abort": true, 00:10:49.214 "seek_hole": false, 00:10:49.214 "seek_data": false, 00:10:49.214 "copy": true, 00:10:49.214 "nvme_iov_md": false 00:10:49.214 }, 00:10:49.214 "memory_domains": [ 00:10:49.214 { 00:10:49.214 "dma_device_id": "system", 00:10:49.214 "dma_device_type": 1 00:10:49.214 }, 00:10:49.214 { 00:10:49.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.214 "dma_device_type": 2 00:10:49.214 } 00:10:49.214 ], 00:10:49.214 "driver_specific": {} 00:10:49.214 } 00:10:49.214 ] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.214 18:57:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.214 BaseBdev3 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.214 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.214 [ 00:10:49.214 { 
00:10:49.214 "name": "BaseBdev3", 00:10:49.214 "aliases": [ 00:10:49.214 "474efc16-86f3-4036-ba1f-adadf7ba4ddf" 00:10:49.214 ], 00:10:49.214 "product_name": "Malloc disk", 00:10:49.214 "block_size": 512, 00:10:49.214 "num_blocks": 65536, 00:10:49.214 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:49.214 "assigned_rate_limits": { 00:10:49.214 "rw_ios_per_sec": 0, 00:10:49.214 "rw_mbytes_per_sec": 0, 00:10:49.214 "r_mbytes_per_sec": 0, 00:10:49.214 "w_mbytes_per_sec": 0 00:10:49.214 }, 00:10:49.214 "claimed": false, 00:10:49.214 "zoned": false, 00:10:49.214 "supported_io_types": { 00:10:49.214 "read": true, 00:10:49.214 "write": true, 00:10:49.214 "unmap": true, 00:10:49.214 "flush": true, 00:10:49.214 "reset": true, 00:10:49.214 "nvme_admin": false, 00:10:49.214 "nvme_io": false, 00:10:49.214 "nvme_io_md": false, 00:10:49.214 "write_zeroes": true, 00:10:49.214 "zcopy": true, 00:10:49.214 "get_zone_info": false, 00:10:49.214 "zone_management": false, 00:10:49.214 "zone_append": false, 00:10:49.214 "compare": false, 00:10:49.214 "compare_and_write": false, 00:10:49.214 "abort": true, 00:10:49.214 "seek_hole": false, 00:10:49.214 "seek_data": false, 00:10:49.214 "copy": true, 00:10:49.214 "nvme_iov_md": false 00:10:49.214 }, 00:10:49.215 "memory_domains": [ 00:10:49.215 { 00:10:49.215 "dma_device_id": "system", 00:10:49.215 "dma_device_type": 1 00:10:49.215 }, 00:10:49.215 { 00:10:49.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.215 "dma_device_type": 2 00:10:49.215 } 00:10:49.215 ], 00:10:49.215 "driver_specific": {} 00:10:49.215 } 00:10:49.215 ] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.215 BaseBdev4 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:49.215 [ 00:10:49.215 { 00:10:49.215 "name": "BaseBdev4", 00:10:49.215 "aliases": [ 00:10:49.215 "11f1871f-c683-4cc8-8ba0-75768eda1745" 00:10:49.215 ], 00:10:49.215 "product_name": "Malloc disk", 00:10:49.215 "block_size": 512, 00:10:49.215 "num_blocks": 65536, 00:10:49.215 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:49.215 "assigned_rate_limits": { 00:10:49.215 "rw_ios_per_sec": 0, 00:10:49.215 "rw_mbytes_per_sec": 0, 00:10:49.215 "r_mbytes_per_sec": 0, 00:10:49.215 "w_mbytes_per_sec": 0 00:10:49.215 }, 00:10:49.215 "claimed": false, 00:10:49.215 "zoned": false, 00:10:49.215 "supported_io_types": { 00:10:49.215 "read": true, 00:10:49.215 "write": true, 00:10:49.215 "unmap": true, 00:10:49.215 "flush": true, 00:10:49.215 "reset": true, 00:10:49.215 "nvme_admin": false, 00:10:49.215 "nvme_io": false, 00:10:49.215 "nvme_io_md": false, 00:10:49.215 "write_zeroes": true, 00:10:49.215 "zcopy": true, 00:10:49.215 "get_zone_info": false, 00:10:49.215 "zone_management": false, 00:10:49.215 "zone_append": false, 00:10:49.215 "compare": false, 00:10:49.215 "compare_and_write": false, 00:10:49.215 "abort": true, 00:10:49.215 "seek_hole": false, 00:10:49.215 "seek_data": false, 00:10:49.215 "copy": true, 00:10:49.215 "nvme_iov_md": false 00:10:49.215 }, 00:10:49.215 "memory_domains": [ 00:10:49.215 { 00:10:49.215 "dma_device_id": "system", 00:10:49.215 "dma_device_type": 1 00:10:49.215 }, 00:10:49.215 { 00:10:49.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.215 "dma_device_type": 2 00:10:49.215 } 00:10:49.215 ], 00:10:49.215 "driver_specific": {} 00:10:49.215 } 00:10:49.215 ] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.215 18:57:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.215 [2024-11-26 18:57:40.500220] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.215 [2024-11-26 18:57:40.500423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.215 [2024-11-26 18:57:40.500474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.215 [2024-11-26 18:57:40.503069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.215 [2024-11-26 18:57:40.503140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.215 "name": "Existed_Raid", 00:10:49.215 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:49.215 "strip_size_kb": 64, 00:10:49.215 "state": "configuring", 00:10:49.215 "raid_level": "raid0", 00:10:49.215 "superblock": true, 00:10:49.215 "num_base_bdevs": 4, 00:10:49.215 "num_base_bdevs_discovered": 3, 00:10:49.215 "num_base_bdevs_operational": 4, 00:10:49.215 "base_bdevs_list": [ 00:10:49.215 { 00:10:49.215 "name": "BaseBdev1", 00:10:49.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.215 "is_configured": false, 00:10:49.215 "data_offset": 0, 00:10:49.215 "data_size": 0 00:10:49.215 }, 00:10:49.215 { 00:10:49.215 "name": "BaseBdev2", 00:10:49.215 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:49.215 "is_configured": true, 00:10:49.215 "data_offset": 2048, 00:10:49.215 "data_size": 63488 
00:10:49.215 }, 00:10:49.215 { 00:10:49.215 "name": "BaseBdev3", 00:10:49.215 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:49.215 "is_configured": true, 00:10:49.215 "data_offset": 2048, 00:10:49.215 "data_size": 63488 00:10:49.215 }, 00:10:49.215 { 00:10:49.215 "name": "BaseBdev4", 00:10:49.215 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:49.215 "is_configured": true, 00:10:49.215 "data_offset": 2048, 00:10:49.215 "data_size": 63488 00:10:49.215 } 00:10:49.215 ] 00:10:49.215 }' 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.215 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 [2024-11-26 18:57:41.032338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.783 "name": "Existed_Raid", 00:10:49.783 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:49.783 "strip_size_kb": 64, 00:10:49.783 "state": "configuring", 00:10:49.783 "raid_level": "raid0", 00:10:49.783 "superblock": true, 00:10:49.783 "num_base_bdevs": 4, 00:10:49.783 "num_base_bdevs_discovered": 2, 00:10:49.783 "num_base_bdevs_operational": 4, 00:10:49.783 "base_bdevs_list": [ 00:10:49.783 { 00:10:49.783 "name": "BaseBdev1", 00:10:49.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.783 "is_configured": false, 00:10:49.783 "data_offset": 0, 00:10:49.783 "data_size": 0 00:10:49.783 }, 00:10:49.783 { 00:10:49.783 "name": null, 00:10:49.783 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:49.783 "is_configured": false, 00:10:49.783 "data_offset": 0, 00:10:49.783 "data_size": 63488 
00:10:49.783 }, 00:10:49.783 { 00:10:49.783 "name": "BaseBdev3", 00:10:49.783 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:49.783 "is_configured": true, 00:10:49.783 "data_offset": 2048, 00:10:49.783 "data_size": 63488 00:10:49.783 }, 00:10:49.783 { 00:10:49.783 "name": "BaseBdev4", 00:10:49.783 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:49.783 "is_configured": true, 00:10:49.783 "data_offset": 2048, 00:10:49.783 "data_size": 63488 00:10:49.783 } 00:10:49.783 ] 00:10:49.783 }' 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.783 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.351 [2024-11-26 18:57:41.630142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.351 BaseBdev1 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.351 [ 00:10:50.351 { 00:10:50.351 "name": "BaseBdev1", 00:10:50.351 "aliases": [ 00:10:50.351 "1775dc55-f1e5-479d-8229-002849a80352" 00:10:50.351 ], 00:10:50.351 "product_name": "Malloc disk", 00:10:50.351 "block_size": 512, 00:10:50.351 "num_blocks": 65536, 00:10:50.351 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:50.351 "assigned_rate_limits": { 00:10:50.351 "rw_ios_per_sec": 0, 00:10:50.351 "rw_mbytes_per_sec": 0, 
00:10:50.351 "r_mbytes_per_sec": 0, 00:10:50.351 "w_mbytes_per_sec": 0 00:10:50.351 }, 00:10:50.351 "claimed": true, 00:10:50.351 "claim_type": "exclusive_write", 00:10:50.351 "zoned": false, 00:10:50.351 "supported_io_types": { 00:10:50.351 "read": true, 00:10:50.351 "write": true, 00:10:50.351 "unmap": true, 00:10:50.351 "flush": true, 00:10:50.351 "reset": true, 00:10:50.351 "nvme_admin": false, 00:10:50.351 "nvme_io": false, 00:10:50.351 "nvme_io_md": false, 00:10:50.351 "write_zeroes": true, 00:10:50.351 "zcopy": true, 00:10:50.351 "get_zone_info": false, 00:10:50.351 "zone_management": false, 00:10:50.351 "zone_append": false, 00:10:50.351 "compare": false, 00:10:50.351 "compare_and_write": false, 00:10:50.351 "abort": true, 00:10:50.351 "seek_hole": false, 00:10:50.351 "seek_data": false, 00:10:50.351 "copy": true, 00:10:50.351 "nvme_iov_md": false 00:10:50.351 }, 00:10:50.351 "memory_domains": [ 00:10:50.351 { 00:10:50.351 "dma_device_id": "system", 00:10:50.351 "dma_device_type": 1 00:10:50.351 }, 00:10:50.351 { 00:10:50.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.351 "dma_device_type": 2 00:10:50.351 } 00:10:50.351 ], 00:10:50.351 "driver_specific": {} 00:10:50.351 } 00:10:50.351 ] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.351 18:57:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.351 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.609 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.609 "name": "Existed_Raid", 00:10:50.609 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:50.609 "strip_size_kb": 64, 00:10:50.609 "state": "configuring", 00:10:50.609 "raid_level": "raid0", 00:10:50.609 "superblock": true, 00:10:50.609 "num_base_bdevs": 4, 00:10:50.609 "num_base_bdevs_discovered": 3, 00:10:50.609 "num_base_bdevs_operational": 4, 00:10:50.609 "base_bdevs_list": [ 00:10:50.609 { 00:10:50.609 "name": "BaseBdev1", 00:10:50.609 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:50.609 "is_configured": true, 00:10:50.609 "data_offset": 2048, 00:10:50.609 "data_size": 63488 00:10:50.609 }, 00:10:50.609 { 
00:10:50.609 "name": null, 00:10:50.609 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:50.609 "is_configured": false, 00:10:50.609 "data_offset": 0, 00:10:50.609 "data_size": 63488 00:10:50.609 }, 00:10:50.609 { 00:10:50.609 "name": "BaseBdev3", 00:10:50.609 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:50.609 "is_configured": true, 00:10:50.609 "data_offset": 2048, 00:10:50.609 "data_size": 63488 00:10:50.609 }, 00:10:50.609 { 00:10:50.609 "name": "BaseBdev4", 00:10:50.609 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:50.609 "is_configured": true, 00:10:50.609 "data_offset": 2048, 00:10:50.609 "data_size": 63488 00:10:50.609 } 00:10:50.609 ] 00:10:50.609 }' 00:10:50.610 18:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.610 18:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.868 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.868 [2024-11-26 18:57:42.226390] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.127 18:57:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.127 "name": "Existed_Raid", 00:10:51.127 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:51.127 "strip_size_kb": 64, 00:10:51.127 "state": "configuring", 00:10:51.127 "raid_level": "raid0", 00:10:51.127 "superblock": true, 00:10:51.127 "num_base_bdevs": 4, 00:10:51.127 "num_base_bdevs_discovered": 2, 00:10:51.127 "num_base_bdevs_operational": 4, 00:10:51.127 "base_bdevs_list": [ 00:10:51.127 { 00:10:51.127 "name": "BaseBdev1", 00:10:51.127 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:51.127 "is_configured": true, 00:10:51.127 "data_offset": 2048, 00:10:51.127 "data_size": 63488 00:10:51.127 }, 00:10:51.127 { 00:10:51.127 "name": null, 00:10:51.127 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:51.127 "is_configured": false, 00:10:51.127 "data_offset": 0, 00:10:51.127 "data_size": 63488 00:10:51.127 }, 00:10:51.127 { 00:10:51.127 "name": null, 00:10:51.127 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:51.127 "is_configured": false, 00:10:51.127 "data_offset": 0, 00:10:51.127 "data_size": 63488 00:10:51.127 }, 00:10:51.127 { 00:10:51.127 "name": "BaseBdev4", 00:10:51.127 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:51.127 "is_configured": true, 00:10:51.127 "data_offset": 2048, 00:10:51.127 "data_size": 63488 00:10:51.127 } 00:10:51.127 ] 00:10:51.127 }' 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.127 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.385 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.385 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.385 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.385 18:57:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.385 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.713 [2024-11-26 18:57:42.786530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.713 "name": "Existed_Raid", 00:10:51.713 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:51.713 "strip_size_kb": 64, 00:10:51.713 "state": "configuring", 00:10:51.713 "raid_level": "raid0", 00:10:51.713 "superblock": true, 00:10:51.713 "num_base_bdevs": 4, 00:10:51.713 "num_base_bdevs_discovered": 3, 00:10:51.713 "num_base_bdevs_operational": 4, 00:10:51.713 "base_bdevs_list": [ 00:10:51.713 { 00:10:51.713 "name": "BaseBdev1", 00:10:51.713 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:51.713 "is_configured": true, 00:10:51.713 "data_offset": 2048, 00:10:51.713 "data_size": 63488 00:10:51.713 }, 00:10:51.713 { 00:10:51.713 "name": null, 00:10:51.713 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:51.713 "is_configured": false, 00:10:51.713 "data_offset": 0, 00:10:51.713 "data_size": 63488 00:10:51.713 }, 00:10:51.713 { 00:10:51.713 "name": "BaseBdev3", 00:10:51.713 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:51.713 "is_configured": true, 00:10:51.713 "data_offset": 2048, 00:10:51.713 "data_size": 63488 00:10:51.713 }, 00:10:51.713 { 00:10:51.713 "name": "BaseBdev4", 00:10:51.713 "uuid": 
"11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:51.713 "is_configured": true, 00:10:51.713 "data_offset": 2048, 00:10:51.713 "data_size": 63488 00:10:51.713 } 00:10:51.713 ] 00:10:51.713 }' 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.713 18:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.971 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.971 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.971 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.971 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.971 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 [2024-11-26 18:57:43.346735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.229 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.230 "name": "Existed_Raid", 00:10:52.230 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:52.230 "strip_size_kb": 64, 00:10:52.230 "state": "configuring", 00:10:52.230 "raid_level": "raid0", 00:10:52.230 "superblock": true, 00:10:52.230 "num_base_bdevs": 4, 00:10:52.230 "num_base_bdevs_discovered": 2, 00:10:52.230 "num_base_bdevs_operational": 4, 00:10:52.230 "base_bdevs_list": [ 00:10:52.230 { 00:10:52.230 "name": null, 00:10:52.230 
"uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:52.230 "is_configured": false, 00:10:52.230 "data_offset": 0, 00:10:52.230 "data_size": 63488 00:10:52.230 }, 00:10:52.230 { 00:10:52.230 "name": null, 00:10:52.230 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:52.230 "is_configured": false, 00:10:52.230 "data_offset": 0, 00:10:52.230 "data_size": 63488 00:10:52.230 }, 00:10:52.230 { 00:10:52.230 "name": "BaseBdev3", 00:10:52.230 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:52.230 "is_configured": true, 00:10:52.230 "data_offset": 2048, 00:10:52.230 "data_size": 63488 00:10:52.230 }, 00:10:52.230 { 00:10:52.230 "name": "BaseBdev4", 00:10:52.230 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:52.230 "is_configured": true, 00:10:52.230 "data_offset": 2048, 00:10:52.230 "data_size": 63488 00:10:52.230 } 00:10:52.230 ] 00:10:52.230 }' 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.230 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.796 [2024-11-26 18:57:43.992089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.796 18:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.796 18:57:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.796 "name": "Existed_Raid", 00:10:52.796 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:52.796 "strip_size_kb": 64, 00:10:52.796 "state": "configuring", 00:10:52.796 "raid_level": "raid0", 00:10:52.796 "superblock": true, 00:10:52.796 "num_base_bdevs": 4, 00:10:52.796 "num_base_bdevs_discovered": 3, 00:10:52.796 "num_base_bdevs_operational": 4, 00:10:52.796 "base_bdevs_list": [ 00:10:52.796 { 00:10:52.796 "name": null, 00:10:52.796 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:52.796 "is_configured": false, 00:10:52.796 "data_offset": 0, 00:10:52.796 "data_size": 63488 00:10:52.796 }, 00:10:52.796 { 00:10:52.796 "name": "BaseBdev2", 00:10:52.796 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:52.796 "is_configured": true, 00:10:52.796 "data_offset": 2048, 00:10:52.796 "data_size": 63488 00:10:52.796 }, 00:10:52.796 { 00:10:52.796 "name": "BaseBdev3", 00:10:52.796 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:52.796 "is_configured": true, 00:10:52.796 "data_offset": 2048, 00:10:52.796 "data_size": 63488 00:10:52.796 }, 00:10:52.796 { 00:10:52.796 "name": "BaseBdev4", 00:10:52.796 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:52.796 "is_configured": true, 00:10:52.796 "data_offset": 2048, 00:10:52.796 "data_size": 63488 00:10:52.796 } 00:10:52.796 ] 00:10:52.796 }' 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.796 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.362 18:57:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1775dc55-f1e5-479d-8229-002849a80352 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.362 [2024-11-26 18:57:44.650620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:53.362 NewBaseBdev 00:10:53.362 [2024-11-26 18:57:44.651120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.362 [2024-11-26 18:57:44.651145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.362 [2024-11-26 18:57:44.651495] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:53.362 [2024-11-26 18:57:44.651667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.362 [2024-11-26 18:57:44.651688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:53.362 [2024-11-26 18:57:44.651844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.362 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.363 
18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.363 [ 00:10:53.363 { 00:10:53.363 "name": "NewBaseBdev", 00:10:53.363 "aliases": [ 00:10:53.363 "1775dc55-f1e5-479d-8229-002849a80352" 00:10:53.363 ], 00:10:53.363 "product_name": "Malloc disk", 00:10:53.363 "block_size": 512, 00:10:53.363 "num_blocks": 65536, 00:10:53.363 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:53.363 "assigned_rate_limits": { 00:10:53.363 "rw_ios_per_sec": 0, 00:10:53.363 "rw_mbytes_per_sec": 0, 00:10:53.363 "r_mbytes_per_sec": 0, 00:10:53.363 "w_mbytes_per_sec": 0 00:10:53.363 }, 00:10:53.363 "claimed": true, 00:10:53.363 "claim_type": "exclusive_write", 00:10:53.363 "zoned": false, 00:10:53.363 "supported_io_types": { 00:10:53.363 "read": true, 00:10:53.363 "write": true, 00:10:53.363 "unmap": true, 00:10:53.363 "flush": true, 00:10:53.363 "reset": true, 00:10:53.363 "nvme_admin": false, 00:10:53.363 "nvme_io": false, 00:10:53.363 "nvme_io_md": false, 00:10:53.363 "write_zeroes": true, 00:10:53.363 "zcopy": true, 00:10:53.363 "get_zone_info": false, 00:10:53.363 "zone_management": false, 00:10:53.363 "zone_append": false, 00:10:53.363 "compare": false, 00:10:53.363 "compare_and_write": false, 00:10:53.363 "abort": true, 00:10:53.363 "seek_hole": false, 00:10:53.363 "seek_data": false, 00:10:53.363 "copy": true, 00:10:53.363 "nvme_iov_md": false 00:10:53.363 }, 00:10:53.363 "memory_domains": [ 00:10:53.363 { 00:10:53.363 "dma_device_id": "system", 00:10:53.363 "dma_device_type": 1 00:10:53.363 }, 00:10:53.363 { 00:10:53.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.363 "dma_device_type": 2 00:10:53.363 } 00:10:53.363 ], 00:10:53.363 "driver_specific": {} 00:10:53.363 } 00:10:53.363 ] 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.363 18:57:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.363 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.620 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.620 "name": "Existed_Raid", 00:10:53.620 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:53.621 "strip_size_kb": 64, 00:10:53.621 
"state": "online", 00:10:53.621 "raid_level": "raid0", 00:10:53.621 "superblock": true, 00:10:53.621 "num_base_bdevs": 4, 00:10:53.621 "num_base_bdevs_discovered": 4, 00:10:53.621 "num_base_bdevs_operational": 4, 00:10:53.621 "base_bdevs_list": [ 00:10:53.621 { 00:10:53.621 "name": "NewBaseBdev", 00:10:53.621 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 }, 00:10:53.621 { 00:10:53.621 "name": "BaseBdev2", 00:10:53.621 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 }, 00:10:53.621 { 00:10:53.621 "name": "BaseBdev3", 00:10:53.621 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 }, 00:10:53.621 { 00:10:53.621 "name": "BaseBdev4", 00:10:53.621 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 } 00:10:53.621 ] 00:10:53.621 }' 00:10:53.621 18:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.621 18:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.879 
18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.879 [2024-11-26 18:57:45.203351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.879 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.137 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.137 "name": "Existed_Raid", 00:10:54.137 "aliases": [ 00:10:54.137 "4eba1ed9-665c-4841-b164-f0dfc5fcf1da" 00:10:54.137 ], 00:10:54.137 "product_name": "Raid Volume", 00:10:54.137 "block_size": 512, 00:10:54.137 "num_blocks": 253952, 00:10:54.137 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:54.137 "assigned_rate_limits": { 00:10:54.137 "rw_ios_per_sec": 0, 00:10:54.137 "rw_mbytes_per_sec": 0, 00:10:54.137 "r_mbytes_per_sec": 0, 00:10:54.137 "w_mbytes_per_sec": 0 00:10:54.137 }, 00:10:54.137 "claimed": false, 00:10:54.137 "zoned": false, 00:10:54.137 "supported_io_types": { 00:10:54.137 "read": true, 00:10:54.137 "write": true, 00:10:54.137 "unmap": true, 00:10:54.137 "flush": true, 00:10:54.137 "reset": true, 00:10:54.137 "nvme_admin": false, 00:10:54.137 "nvme_io": false, 00:10:54.137 "nvme_io_md": false, 00:10:54.137 "write_zeroes": true, 00:10:54.137 "zcopy": false, 00:10:54.137 "get_zone_info": false, 00:10:54.137 "zone_management": false, 00:10:54.137 "zone_append": false, 00:10:54.137 "compare": false, 00:10:54.137 "compare_and_write": false, 00:10:54.137 "abort": 
false, 00:10:54.137 "seek_hole": false, 00:10:54.138 "seek_data": false, 00:10:54.138 "copy": false, 00:10:54.138 "nvme_iov_md": false 00:10:54.138 }, 00:10:54.138 "memory_domains": [ 00:10:54.138 { 00:10:54.138 "dma_device_id": "system", 00:10:54.138 "dma_device_type": 1 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.138 "dma_device_type": 2 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "system", 00:10:54.138 "dma_device_type": 1 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.138 "dma_device_type": 2 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "system", 00:10:54.138 "dma_device_type": 1 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.138 "dma_device_type": 2 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "system", 00:10:54.138 "dma_device_type": 1 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.138 "dma_device_type": 2 00:10:54.138 } 00:10:54.138 ], 00:10:54.138 "driver_specific": { 00:10:54.138 "raid": { 00:10:54.138 "uuid": "4eba1ed9-665c-4841-b164-f0dfc5fcf1da", 00:10:54.138 "strip_size_kb": 64, 00:10:54.138 "state": "online", 00:10:54.138 "raid_level": "raid0", 00:10:54.138 "superblock": true, 00:10:54.138 "num_base_bdevs": 4, 00:10:54.138 "num_base_bdevs_discovered": 4, 00:10:54.138 "num_base_bdevs_operational": 4, 00:10:54.138 "base_bdevs_list": [ 00:10:54.138 { 00:10:54.138 "name": "NewBaseBdev", 00:10:54.138 "uuid": "1775dc55-f1e5-479d-8229-002849a80352", 00:10:54.138 "is_configured": true, 00:10:54.138 "data_offset": 2048, 00:10:54.138 "data_size": 63488 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "name": "BaseBdev2", 00:10:54.138 "uuid": "a1587a83-c3b3-442d-9e1e-a833e45582d1", 00:10:54.138 "is_configured": true, 00:10:54.138 "data_offset": 2048, 00:10:54.138 "data_size": 63488 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 
"name": "BaseBdev3", 00:10:54.138 "uuid": "474efc16-86f3-4036-ba1f-adadf7ba4ddf", 00:10:54.138 "is_configured": true, 00:10:54.138 "data_offset": 2048, 00:10:54.138 "data_size": 63488 00:10:54.138 }, 00:10:54.138 { 00:10:54.138 "name": "BaseBdev4", 00:10:54.138 "uuid": "11f1871f-c683-4cc8-8ba0-75768eda1745", 00:10:54.138 "is_configured": true, 00:10:54.138 "data_offset": 2048, 00:10:54.138 "data_size": 63488 00:10:54.138 } 00:10:54.138 ] 00:10:54.138 } 00:10:54.138 } 00:10:54.138 }' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:54.138 BaseBdev2 00:10:54.138 BaseBdev3 00:10:54.138 BaseBdev4' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.138 18:57:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.138 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.396 [2024-11-26 18:57:45.570944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.396 [2024-11-26 18:57:45.571220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.396 [2024-11-26 18:57:45.571501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.396 [2024-11-26 18:57:45.571644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.396 [2024-11-26 18:57:45.571918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70181 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70181 ']' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70181 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70181 00:10:54.396 killing process with pid 70181 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70181' 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70181 00:10:54.396 [2024-11-26 18:57:45.610088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.396 18:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70181 00:10:54.654 [2024-11-26 18:57:45.967341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.029 18:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:56.029 00:10:56.029 real 0m12.819s 00:10:56.029 user 0m21.243s 00:10:56.029 sys 0m1.774s 00:10:56.029 18:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.029 18:57:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.029 ************************************ 00:10:56.029 END TEST raid_state_function_test_sb 00:10:56.029 ************************************ 00:10:56.029 18:57:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:56.029 18:57:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:56.029 18:57:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.029 18:57:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.029 ************************************ 00:10:56.029 START TEST raid_superblock_test 00:10:56.029 ************************************ 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70871 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70871 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70871 ']' 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.029 18:57:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.029 [2024-11-26 18:57:47.200063] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:10:56.029 [2024-11-26 18:57:47.200473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70871 ] 00:10:56.029 [2024-11-26 18:57:47.390048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.287 [2024-11-26 18:57:47.568645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.545 [2024-11-26 18:57:47.810455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.545 [2024-11-26 18:57:47.810547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:57.113 
18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.113 malloc1 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.113 [2024-11-26 18:57:48.248239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.113 [2024-11-26 18:57:48.248312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.113 [2024-11-26 18:57:48.248346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:57.113 [2024-11-26 18:57:48.248362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.113 [2024-11-26 18:57:48.251194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.113 [2024-11-26 18:57:48.251239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.113 pt1 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.113 malloc2 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.113 [2024-11-26 18:57:48.304084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.113 [2024-11-26 18:57:48.304289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.113 [2024-11-26 18:57:48.304370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:57.113 [2024-11-26 18:57:48.304507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.113 [2024-11-26 18:57:48.307495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.113 [2024-11-26 18:57:48.307644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.113 
pt2 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:57.113 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 malloc3 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 [2024-11-26 18:57:48.367750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:57.114 [2024-11-26 18:57:48.367821] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.114 [2024-11-26 18:57:48.367856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:57.114 [2024-11-26 18:57:48.367872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.114 [2024-11-26 18:57:48.370744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.114 [2024-11-26 18:57:48.370791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:57.114 pt3 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 malloc4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 [2024-11-26 18:57:48.423841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:57.114 [2024-11-26 18:57:48.424056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.114 [2024-11-26 18:57:48.424099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.114 [2024-11-26 18:57:48.424114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.114 [2024-11-26 18:57:48.426920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.114 [2024-11-26 18:57:48.426962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:57.114 pt4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 [2024-11-26 18:57:48.435862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.114 [2024-11-26 
18:57:48.438289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.114 [2024-11-26 18:57:48.438537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:57.114 [2024-11-26 18:57:48.438621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:57.114 [2024-11-26 18:57:48.438879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:57.114 [2024-11-26 18:57:48.438919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:57.114 [2024-11-26 18:57:48.439247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.114 [2024-11-26 18:57:48.439483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:57.114 [2024-11-26 18:57:48.439506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:57.114 [2024-11-26 18:57:48.439692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.114 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.373 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.373 "name": "raid_bdev1", 00:10:57.373 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:10:57.373 "strip_size_kb": 64, 00:10:57.373 "state": "online", 00:10:57.373 "raid_level": "raid0", 00:10:57.373 "superblock": true, 00:10:57.373 "num_base_bdevs": 4, 00:10:57.373 "num_base_bdevs_discovered": 4, 00:10:57.373 "num_base_bdevs_operational": 4, 00:10:57.373 "base_bdevs_list": [ 00:10:57.373 { 00:10:57.373 "name": "pt1", 00:10:57.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.373 "is_configured": true, 00:10:57.373 "data_offset": 2048, 00:10:57.373 "data_size": 63488 00:10:57.373 }, 00:10:57.373 { 00:10:57.373 "name": "pt2", 00:10:57.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.373 "is_configured": true, 00:10:57.373 "data_offset": 2048, 00:10:57.373 "data_size": 63488 00:10:57.373 }, 00:10:57.373 { 00:10:57.373 "name": "pt3", 00:10:57.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.373 "is_configured": true, 00:10:57.373 "data_offset": 2048, 00:10:57.373 
"data_size": 63488 00:10:57.373 }, 00:10:57.373 { 00:10:57.373 "name": "pt4", 00:10:57.373 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.373 "is_configured": true, 00:10:57.373 "data_offset": 2048, 00:10:57.373 "data_size": 63488 00:10:57.373 } 00:10:57.373 ] 00:10:57.373 }' 00:10:57.373 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.373 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.631 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.631 [2024-11-26 18:57:48.976464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.890 18:57:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.890 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.890 "name": "raid_bdev1", 00:10:57.890 "aliases": [ 00:10:57.890 "e845a507-5225-4155-9fc8-35bee1d42ce1" 
00:10:57.890 ], 00:10:57.890 "product_name": "Raid Volume", 00:10:57.890 "block_size": 512, 00:10:57.890 "num_blocks": 253952, 00:10:57.890 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:10:57.890 "assigned_rate_limits": { 00:10:57.890 "rw_ios_per_sec": 0, 00:10:57.890 "rw_mbytes_per_sec": 0, 00:10:57.890 "r_mbytes_per_sec": 0, 00:10:57.890 "w_mbytes_per_sec": 0 00:10:57.890 }, 00:10:57.890 "claimed": false, 00:10:57.890 "zoned": false, 00:10:57.890 "supported_io_types": { 00:10:57.890 "read": true, 00:10:57.890 "write": true, 00:10:57.890 "unmap": true, 00:10:57.890 "flush": true, 00:10:57.890 "reset": true, 00:10:57.890 "nvme_admin": false, 00:10:57.890 "nvme_io": false, 00:10:57.890 "nvme_io_md": false, 00:10:57.890 "write_zeroes": true, 00:10:57.890 "zcopy": false, 00:10:57.890 "get_zone_info": false, 00:10:57.890 "zone_management": false, 00:10:57.890 "zone_append": false, 00:10:57.890 "compare": false, 00:10:57.890 "compare_and_write": false, 00:10:57.890 "abort": false, 00:10:57.890 "seek_hole": false, 00:10:57.890 "seek_data": false, 00:10:57.890 "copy": false, 00:10:57.890 "nvme_iov_md": false 00:10:57.890 }, 00:10:57.890 "memory_domains": [ 00:10:57.890 { 00:10:57.890 "dma_device_id": "system", 00:10:57.890 "dma_device_type": 1 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.890 "dma_device_type": 2 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": "system", 00:10:57.890 "dma_device_type": 1 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.890 "dma_device_type": 2 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": "system", 00:10:57.890 "dma_device_type": 1 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.890 "dma_device_type": 2 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": "system", 00:10:57.890 "dma_device_type": 1 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:57.890 "dma_device_type": 2 00:10:57.890 } 00:10:57.890 ], 00:10:57.890 "driver_specific": { 00:10:57.890 "raid": { 00:10:57.890 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:10:57.890 "strip_size_kb": 64, 00:10:57.890 "state": "online", 00:10:57.890 "raid_level": "raid0", 00:10:57.890 "superblock": true, 00:10:57.890 "num_base_bdevs": 4, 00:10:57.890 "num_base_bdevs_discovered": 4, 00:10:57.890 "num_base_bdevs_operational": 4, 00:10:57.890 "base_bdevs_list": [ 00:10:57.890 { 00:10:57.890 "name": "pt1", 00:10:57.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.890 "is_configured": true, 00:10:57.890 "data_offset": 2048, 00:10:57.890 "data_size": 63488 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "name": "pt2", 00:10:57.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.890 "is_configured": true, 00:10:57.890 "data_offset": 2048, 00:10:57.890 "data_size": 63488 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "name": "pt3", 00:10:57.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.890 "is_configured": true, 00:10:57.890 "data_offset": 2048, 00:10:57.890 "data_size": 63488 00:10:57.890 }, 00:10:57.890 { 00:10:57.890 "name": "pt4", 00:10:57.890 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.890 "is_configured": true, 00:10:57.890 "data_offset": 2048, 00:10:57.890 "data_size": 63488 00:10:57.890 } 00:10:57.890 ] 00:10:57.890 } 00:10:57.890 } 00:10:57.890 }' 00:10:57.890 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.890 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.890 pt2 00:10:57.890 pt3 00:10:57.890 pt4' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.891 18:57:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.891 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 [2024-11-26 18:57:49.344519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e845a507-5225-4155-9fc8-35bee1d42ce1 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e845a507-5225-4155-9fc8-35bee1d42ce1 ']' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 [2024-11-26 18:57:49.396156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.150 [2024-11-26 18:57:49.396309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.150 [2024-11-26 18:57:49.396434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.150 [2024-11-26 18:57:49.396525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.150 [2024-11-26 18:57:49.396547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.150 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.410 [2024-11-26 18:57:49.548213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:58.410 [2024-11-26 18:57:49.550874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:58.410 [2024-11-26 18:57:49.551083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:58.410 [2024-11-26 18:57:49.551188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:58.410 [2024-11-26 18:57:49.551381] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:58.410 [2024-11-26 18:57:49.551636] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:58.410 [2024-11-26 18:57:49.551808] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:58.410 [2024-11-26 18:57:49.551984] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:58.410 [2024-11-26 18:57:49.552185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.410 [2024-11-26 18:57:49.552212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:58.410 request: 00:10:58.410 { 00:10:58.410 "name": "raid_bdev1", 00:10:58.410 "raid_level": "raid0", 00:10:58.410 "base_bdevs": [ 00:10:58.410 "malloc1", 00:10:58.410 "malloc2", 00:10:58.410 "malloc3", 00:10:58.410 "malloc4" 00:10:58.410 ], 00:10:58.410 "strip_size_kb": 64, 00:10:58.410 "superblock": false, 00:10:58.410 "method": "bdev_raid_create", 00:10:58.410 "req_id": 1 00:10:58.410 } 00:10:58.410 Got JSON-RPC error response 00:10:58.410 response: 00:10:58.410 { 00:10:58.410 "code": -17, 00:10:58.410 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:58.410 } 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.410 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.411 [2024-11-26 18:57:49.612448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.411 [2024-11-26 18:57:49.612636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.411 [2024-11-26 18:57:49.612675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.411 [2024-11-26 18:57:49.612693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.411 [2024-11-26 18:57:49.615620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.411 [2024-11-26 18:57:49.615780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.411 [2024-11-26 18:57:49.615891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:58.411 [2024-11-26 18:57:49.615981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.411 pt1 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.411 "name": "raid_bdev1", 00:10:58.411 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:10:58.411 "strip_size_kb": 64, 00:10:58.411 "state": "configuring", 00:10:58.411 "raid_level": "raid0", 00:10:58.411 "superblock": true, 00:10:58.411 "num_base_bdevs": 4, 00:10:58.411 "num_base_bdevs_discovered": 1, 00:10:58.411 "num_base_bdevs_operational": 4, 00:10:58.411 "base_bdevs_list": [ 00:10:58.411 { 00:10:58.411 "name": "pt1", 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.411 "is_configured": true, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 }, 00:10:58.411 { 00:10:58.411 "name": null, 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.411 "is_configured": false, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 }, 00:10:58.411 { 00:10:58.411 "name": null, 00:10:58.411 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:58.411 "is_configured": false, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 }, 00:10:58.411 { 00:10:58.411 "name": null, 00:10:58.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.411 "is_configured": false, 00:10:58.411 "data_offset": 2048, 00:10:58.411 "data_size": 63488 00:10:58.411 } 00:10:58.411 ] 00:10:58.411 }' 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.411 18:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 [2024-11-26 18:57:50.144721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.978 [2024-11-26 18:57:50.144971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.978 [2024-11-26 18:57:50.145012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:58.978 [2024-11-26 18:57:50.145031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.978 [2024-11-26 18:57:50.145616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.978 [2024-11-26 18:57:50.145656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.978 [2024-11-26 18:57:50.145763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.978 [2024-11-26 18:57:50.145800] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.978 pt2 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 [2024-11-26 18:57:50.152682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.978 18:57:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.978 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.978 "name": "raid_bdev1", 00:10:58.978 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:10:58.978 "strip_size_kb": 64, 00:10:58.978 "state": "configuring", 00:10:58.978 "raid_level": "raid0", 00:10:58.978 "superblock": true, 00:10:58.978 "num_base_bdevs": 4, 00:10:58.978 "num_base_bdevs_discovered": 1, 00:10:58.978 "num_base_bdevs_operational": 4, 00:10:58.978 "base_bdevs_list": [ 00:10:58.978 { 00:10:58.978 "name": "pt1", 00:10:58.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.978 "is_configured": true, 00:10:58.978 "data_offset": 2048, 00:10:58.978 "data_size": 63488 00:10:58.978 }, 00:10:58.978 { 00:10:58.978 "name": null, 00:10:58.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.978 "is_configured": false, 00:10:58.978 "data_offset": 0, 00:10:58.978 "data_size": 63488 00:10:58.978 }, 00:10:58.978 { 00:10:58.978 "name": null, 00:10:58.978 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.978 "is_configured": false, 00:10:58.978 "data_offset": 2048, 00:10:58.978 "data_size": 63488 00:10:58.978 }, 00:10:58.978 { 00:10:58.978 "name": null, 00:10:58.978 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.978 "is_configured": false, 00:10:58.978 "data_offset": 2048, 00:10:58.978 "data_size": 63488 00:10:58.979 } 00:10:58.979 ] 00:10:58.979 }' 00:10:58.979 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.979 18:57:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.546 [2024-11-26 18:57:50.656844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.546 [2024-11-26 18:57:50.656942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.546 [2024-11-26 18:57:50.656983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:59.546 [2024-11-26 18:57:50.657004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.546 [2024-11-26 18:57:50.657731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.546 [2024-11-26 18:57:50.657762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.546 [2024-11-26 18:57:50.658083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:59.546 [2024-11-26 18:57:50.658173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.546 pt2 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.546 [2024-11-26 18:57:50.668806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:59.546 [2024-11-26 18:57:50.669002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.546 [2024-11-26 18:57:50.669040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:59.546 [2024-11-26 18:57:50.669055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.546 [2024-11-26 18:57:50.669511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.546 [2024-11-26 18:57:50.669536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:59.546 [2024-11-26 18:57:50.669618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:59.546 [2024-11-26 18:57:50.669655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:59.546 pt3 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.546 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.546 [2024-11-26 18:57:50.680774] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:59.546 [2024-11-26 18:57:50.680826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.546 [2024-11-26 18:57:50.680852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:59.546 [2024-11-26 18:57:50.680866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.546 [2024-11-26 18:57:50.681344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.546 [2024-11-26 18:57:50.681377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:59.546 [2024-11-26 18:57:50.681458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:59.546 [2024-11-26 18:57:50.681490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:59.546 [2024-11-26 18:57:50.681680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:59.546 [2024-11-26 18:57:50.681703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.546 [2024-11-26 18:57:50.682017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:59.546 [2024-11-26 18:57:50.682217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:59.547 [2024-11-26 18:57:50.682240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:59.547 [2024-11-26 18:57:50.682397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.547 pt4 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.547 "name": "raid_bdev1", 00:10:59.547 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:10:59.547 "strip_size_kb": 64, 00:10:59.547 "state": "online", 00:10:59.547 "raid_level": "raid0", 00:10:59.547 
"superblock": true, 00:10:59.547 "num_base_bdevs": 4, 00:10:59.547 "num_base_bdevs_discovered": 4, 00:10:59.547 "num_base_bdevs_operational": 4, 00:10:59.547 "base_bdevs_list": [ 00:10:59.547 { 00:10:59.547 "name": "pt1", 00:10:59.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.547 "is_configured": true, 00:10:59.547 "data_offset": 2048, 00:10:59.547 "data_size": 63488 00:10:59.547 }, 00:10:59.547 { 00:10:59.547 "name": "pt2", 00:10:59.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.547 "is_configured": true, 00:10:59.547 "data_offset": 2048, 00:10:59.547 "data_size": 63488 00:10:59.547 }, 00:10:59.547 { 00:10:59.547 "name": "pt3", 00:10:59.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.547 "is_configured": true, 00:10:59.547 "data_offset": 2048, 00:10:59.547 "data_size": 63488 00:10:59.547 }, 00:10:59.547 { 00:10:59.547 "name": "pt4", 00:10:59.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.547 "is_configured": true, 00:10:59.547 "data_offset": 2048, 00:10:59.547 "data_size": 63488 00:10:59.547 } 00:10:59.547 ] 00:10:59.547 }' 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.547 18:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.168 18:57:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.168 [2024-11-26 18:57:51.213452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.168 "name": "raid_bdev1", 00:11:00.168 "aliases": [ 00:11:00.168 "e845a507-5225-4155-9fc8-35bee1d42ce1" 00:11:00.168 ], 00:11:00.168 "product_name": "Raid Volume", 00:11:00.168 "block_size": 512, 00:11:00.168 "num_blocks": 253952, 00:11:00.168 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:11:00.168 "assigned_rate_limits": { 00:11:00.168 "rw_ios_per_sec": 0, 00:11:00.168 "rw_mbytes_per_sec": 0, 00:11:00.168 "r_mbytes_per_sec": 0, 00:11:00.168 "w_mbytes_per_sec": 0 00:11:00.168 }, 00:11:00.168 "claimed": false, 00:11:00.168 "zoned": false, 00:11:00.168 "supported_io_types": { 00:11:00.168 "read": true, 00:11:00.168 "write": true, 00:11:00.168 "unmap": true, 00:11:00.168 "flush": true, 00:11:00.168 "reset": true, 00:11:00.168 "nvme_admin": false, 00:11:00.168 "nvme_io": false, 00:11:00.168 "nvme_io_md": false, 00:11:00.168 "write_zeroes": true, 00:11:00.168 "zcopy": false, 00:11:00.168 "get_zone_info": false, 00:11:00.168 "zone_management": false, 00:11:00.168 "zone_append": false, 00:11:00.168 "compare": false, 00:11:00.168 "compare_and_write": false, 00:11:00.168 "abort": false, 00:11:00.168 "seek_hole": false, 00:11:00.168 "seek_data": false, 00:11:00.168 "copy": false, 00:11:00.168 "nvme_iov_md": false 00:11:00.168 }, 00:11:00.168 
"memory_domains": [ 00:11:00.168 { 00:11:00.168 "dma_device_id": "system", 00:11:00.168 "dma_device_type": 1 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.168 "dma_device_type": 2 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "system", 00:11:00.168 "dma_device_type": 1 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.168 "dma_device_type": 2 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "system", 00:11:00.168 "dma_device_type": 1 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.168 "dma_device_type": 2 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "system", 00:11:00.168 "dma_device_type": 1 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.168 "dma_device_type": 2 00:11:00.168 } 00:11:00.168 ], 00:11:00.168 "driver_specific": { 00:11:00.168 "raid": { 00:11:00.168 "uuid": "e845a507-5225-4155-9fc8-35bee1d42ce1", 00:11:00.168 "strip_size_kb": 64, 00:11:00.168 "state": "online", 00:11:00.168 "raid_level": "raid0", 00:11:00.168 "superblock": true, 00:11:00.168 "num_base_bdevs": 4, 00:11:00.168 "num_base_bdevs_discovered": 4, 00:11:00.168 "num_base_bdevs_operational": 4, 00:11:00.168 "base_bdevs_list": [ 00:11:00.168 { 00:11:00.168 "name": "pt1", 00:11:00.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.168 "is_configured": true, 00:11:00.168 "data_offset": 2048, 00:11:00.168 "data_size": 63488 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "name": "pt2", 00:11:00.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.168 "is_configured": true, 00:11:00.168 "data_offset": 2048, 00:11:00.168 "data_size": 63488 00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "name": "pt3", 00:11:00.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.168 "is_configured": true, 00:11:00.168 "data_offset": 2048, 00:11:00.168 "data_size": 63488 
00:11:00.168 }, 00:11:00.168 { 00:11:00.168 "name": "pt4", 00:11:00.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.168 "is_configured": true, 00:11:00.168 "data_offset": 2048, 00:11:00.168 "data_size": 63488 00:11:00.168 } 00:11:00.168 ] 00:11:00.168 } 00:11:00.168 } 00:11:00.168 }' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.168 pt2 00:11:00.168 pt3 00:11:00.168 pt4' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.168 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.169 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:00.427 [2024-11-26 18:57:51.585484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e845a507-5225-4155-9fc8-35bee1d42ce1 '!=' e845a507-5225-4155-9fc8-35bee1d42ce1 ']' 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70871 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70871 ']' 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70871 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70871 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70871' 00:11:00.427 killing process with pid 70871 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70871 00:11:00.427 [2024-11-26 18:57:51.661841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.427 18:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70871 00:11:00.427 [2024-11-26 18:57:51.661966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.427 [2024-11-26 18:57:51.662066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.427 [2024-11-26 18:57:51.662082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:00.685 [2024-11-26 18:57:52.028293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.063 ************************************ 00:11:02.063 END TEST raid_superblock_test 00:11:02.063 ************************************ 00:11:02.063 18:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:02.063 00:11:02.063 real 0m6.020s 00:11:02.063 user 0m9.053s 00:11:02.063 sys 0m0.871s 00:11:02.063 18:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.063 18:57:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.063 18:57:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:02.063 18:57:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.063 18:57:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.063 18:57:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.063 ************************************ 00:11:02.063 START TEST raid_read_error_test 00:11:02.063 ************************************ 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:02.063 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2ub1yzttHG 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71138 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71138 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71138 ']' 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.064 18:57:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.064 [2024-11-26 18:57:53.292346] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:02.064 [2024-11-26 18:57:53.292772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71138 ] 00:11:02.322 [2024-11-26 18:57:53.483259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.322 [2024-11-26 18:57:53.645334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.582 [2024-11-26 18:57:53.857703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.582 [2024-11-26 18:57:53.857747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 BaseBdev1_malloc 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 true 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 [2024-11-26 18:57:54.389886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.150 [2024-11-26 18:57:54.389982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.150 [2024-11-26 18:57:54.390013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.150 [2024-11-26 18:57:54.390032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.150 [2024-11-26 18:57:54.393150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.150 [2024-11-26 18:57:54.393229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.150 BaseBdev1 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 BaseBdev2_malloc 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 true 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.150 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.151 [2024-11-26 18:57:54.452085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.151 [2024-11-26 18:57:54.452314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.151 [2024-11-26 18:57:54.452350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.151 [2024-11-26 18:57:54.452370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.151 [2024-11-26 18:57:54.455331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.151 [2024-11-26 18:57:54.455507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.151 BaseBdev2 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.151 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 BaseBdev3_malloc 00:11:03.410 18:57:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 true 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 [2024-11-26 18:57:54.528721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:03.410 [2024-11-26 18:57:54.528799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.410 [2024-11-26 18:57:54.528825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:03.410 [2024-11-26 18:57:54.528841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.410 [2024-11-26 18:57:54.531884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.410 [2024-11-26 18:57:54.531969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:03.410 BaseBdev3 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 BaseBdev4_malloc 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 true 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 [2024-11-26 18:57:54.590263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:03.410 [2024-11-26 18:57:54.590360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.410 [2024-11-26 18:57:54.590401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.410 [2024-11-26 18:57:54.590418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.410 [2024-11-26 18:57:54.593468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.410 [2024-11-26 18:57:54.593715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:03.410 BaseBdev4 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 [2024-11-26 18:57:54.598404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.410 [2024-11-26 18:57:54.601107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.410 [2024-11-26 18:57:54.601240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.410 [2024-11-26 18:57:54.601348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.410 [2024-11-26 18:57:54.601678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:03.410 [2024-11-26 18:57:54.601703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.410 [2024-11-26 18:57:54.602076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:03.410 [2024-11-26 18:57:54.602305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:03.410 [2024-11-26 18:57:54.602329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:03.410 [2024-11-26 18:57:54.602566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:03.410 18:57:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.410 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.411 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.411 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.411 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.411 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.411 "name": "raid_bdev1", 00:11:03.411 "uuid": "97c26b60-1b21-46d1-9f8a-68419e0ba60c", 00:11:03.411 "strip_size_kb": 64, 00:11:03.411 "state": "online", 00:11:03.411 "raid_level": "raid0", 00:11:03.411 "superblock": true, 00:11:03.411 "num_base_bdevs": 4, 00:11:03.411 "num_base_bdevs_discovered": 4, 00:11:03.411 "num_base_bdevs_operational": 4, 00:11:03.411 "base_bdevs_list": [ 00:11:03.411 
{ 00:11:03.411 "name": "BaseBdev1", 00:11:03.411 "uuid": "8b2d20eb-71d2-51ed-9556-8ae8feb23849", 00:11:03.411 "is_configured": true, 00:11:03.411 "data_offset": 2048, 00:11:03.411 "data_size": 63488 00:11:03.411 }, 00:11:03.411 { 00:11:03.411 "name": "BaseBdev2", 00:11:03.411 "uuid": "26526dcc-2890-55db-b3ba-b5b3facf42c6", 00:11:03.411 "is_configured": true, 00:11:03.411 "data_offset": 2048, 00:11:03.411 "data_size": 63488 00:11:03.411 }, 00:11:03.411 { 00:11:03.411 "name": "BaseBdev3", 00:11:03.411 "uuid": "655631e0-fca0-5665-8293-ef42d82bd7b0", 00:11:03.411 "is_configured": true, 00:11:03.411 "data_offset": 2048, 00:11:03.411 "data_size": 63488 00:11:03.411 }, 00:11:03.411 { 00:11:03.411 "name": "BaseBdev4", 00:11:03.411 "uuid": "4a48d768-157b-5699-bc54-5efe825b13df", 00:11:03.411 "is_configured": true, 00:11:03.411 "data_offset": 2048, 00:11:03.411 "data_size": 63488 00:11:03.411 } 00:11:03.411 ] 00:11:03.411 }' 00:11:03.411 18:57:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.411 18:57:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.976 18:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:03.976 18:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:03.976 [2024-11-26 18:57:55.232233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.928 18:57:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.928 18:57:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.928 "name": "raid_bdev1", 00:11:04.928 "uuid": "97c26b60-1b21-46d1-9f8a-68419e0ba60c", 00:11:04.928 "strip_size_kb": 64, 00:11:04.928 "state": "online", 00:11:04.928 "raid_level": "raid0", 00:11:04.928 "superblock": true, 00:11:04.928 "num_base_bdevs": 4, 00:11:04.928 "num_base_bdevs_discovered": 4, 00:11:04.928 "num_base_bdevs_operational": 4, 00:11:04.928 "base_bdevs_list": [ 00:11:04.928 { 00:11:04.928 "name": "BaseBdev1", 00:11:04.928 "uuid": "8b2d20eb-71d2-51ed-9556-8ae8feb23849", 00:11:04.928 "is_configured": true, 00:11:04.928 "data_offset": 2048, 00:11:04.928 "data_size": 63488 00:11:04.928 }, 00:11:04.928 { 00:11:04.928 "name": "BaseBdev2", 00:11:04.928 "uuid": "26526dcc-2890-55db-b3ba-b5b3facf42c6", 00:11:04.928 "is_configured": true, 00:11:04.928 "data_offset": 2048, 00:11:04.928 "data_size": 63488 00:11:04.928 }, 00:11:04.928 { 00:11:04.928 "name": "BaseBdev3", 00:11:04.928 "uuid": "655631e0-fca0-5665-8293-ef42d82bd7b0", 00:11:04.928 "is_configured": true, 00:11:04.928 "data_offset": 2048, 00:11:04.928 "data_size": 63488 00:11:04.928 }, 00:11:04.928 { 00:11:04.928 "name": "BaseBdev4", 00:11:04.928 "uuid": "4a48d768-157b-5699-bc54-5efe825b13df", 00:11:04.928 "is_configured": true, 00:11:04.928 "data_offset": 2048, 00:11:04.928 "data_size": 63488 00:11:04.928 } 00:11:04.928 ] 00:11:04.928 }' 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.928 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.495 [2024-11-26 18:57:56.684913] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.495 [2024-11-26 18:57:56.684973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.495 [2024-11-26 18:57:56.688460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.495 [2024-11-26 18:57:56.688726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.495 [2024-11-26 18:57:56.688805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.495 [2024-11-26 18:57:56.688826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:05.495 { 00:11:05.495 "results": [ 00:11:05.495 { 00:11:05.495 "job": "raid_bdev1", 00:11:05.495 "core_mask": "0x1", 00:11:05.495 "workload": "randrw", 00:11:05.495 "percentage": 50, 00:11:05.495 "status": "finished", 00:11:05.495 "queue_depth": 1, 00:11:05.495 "io_size": 131072, 00:11:05.495 "runtime": 1.449825, 00:11:05.495 "iops": 10228.130981325332, 00:11:05.495 "mibps": 1278.5163726656665, 00:11:05.495 "io_failed": 1, 00:11:05.495 "io_timeout": 0, 00:11:05.495 "avg_latency_us": 136.38147734935328, 00:11:05.495 "min_latency_us": 38.167272727272724, 00:11:05.495 "max_latency_us": 1846.9236363636364 00:11:05.495 } 00:11:05.495 ], 00:11:05.495 "core_count": 1 00:11:05.495 } 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71138 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71138 ']' 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71138 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:05.495 18:57:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71138 00:11:05.495 killing process with pid 71138 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71138' 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71138 00:11:05.495 [2024-11-26 18:57:56.720448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.495 18:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71138 00:11:05.754 [2024-11-26 18:57:57.029452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2ub1yzttHG 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:11:07.129 00:11:07.129 real 0m4.965s 00:11:07.129 user 0m6.201s 00:11:07.129 sys 0m0.595s 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.129 ************************************ 00:11:07.129 END TEST raid_read_error_test 00:11:07.129 ************************************ 00:11:07.129 18:57:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 18:57:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:07.129 18:57:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.129 18:57:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.129 18:57:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 ************************************ 00:11:07.129 START TEST raid_write_error_test 00:11:07.129 ************************************ 00:11:07.129 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:07.129 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:07.129 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.130 18:57:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O9GJSQbFB0 
00:11:07.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71288 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71288 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71288 ']' 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.130 18:57:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.130 [2024-11-26 18:57:58.315862] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:07.130 [2024-11-26 18:57:58.316296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:11:07.388 [2024-11-26 18:57:58.507164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.388 [2024-11-26 18:57:58.671209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.645 [2024-11-26 18:57:58.912252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.645 [2024-11-26 18:57:58.912592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 BaseBdev1_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 true 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 [2024-11-26 18:57:59.425431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.211 [2024-11-26 18:57:59.425510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.211 [2024-11-26 18:57:59.425543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.211 [2024-11-26 18:57:59.425562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.211 [2024-11-26 18:57:59.428512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.211 [2024-11-26 18:57:59.428572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.211 BaseBdev1 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 BaseBdev2_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.211 18:57:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 true 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 [2024-11-26 18:57:59.482594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.211 [2024-11-26 18:57:59.482665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.211 [2024-11-26 18:57:59.482691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.211 [2024-11-26 18:57:59.482708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.211 [2024-11-26 18:57:59.485535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.211 [2024-11-26 18:57:59.485584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.211 BaseBdev2 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:08.211 BaseBdev3_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 true 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.211 [2024-11-26 18:57:59.550796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.211 [2024-11-26 18:57:59.551022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.211 [2024-11-26 18:57:59.551060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.211 [2024-11-26 18:57:59.551079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.211 [2024-11-26 18:57:59.554007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.211 [2024-11-26 18:57:59.554165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.211 BaseBdev3 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.211 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.469 BaseBdev4_malloc 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.469 true 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.469 [2024-11-26 18:57:59.607158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:08.469 [2024-11-26 18:57:59.607224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.469 [2024-11-26 18:57:59.607251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:08.469 [2024-11-26 18:57:59.607269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.469 [2024-11-26 18:57:59.610180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.469 [2024-11-26 18:57:59.610235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:08.469 BaseBdev4 
00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.469 [2024-11-26 18:57:59.615273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.469 [2024-11-26 18:57:59.617835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.469 [2024-11-26 18:57:59.617984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.469 [2024-11-26 18:57:59.618086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.469 [2024-11-26 18:57:59.618389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:08.469 [2024-11-26 18:57:59.618415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.469 [2024-11-26 18:57:59.618758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:08.469 [2024-11-26 18:57:59.618994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:08.469 [2024-11-26 18:57:59.619015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:08.469 [2024-11-26 18:57:59.619303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.469 "name": "raid_bdev1", 00:11:08.469 "uuid": "703bc70a-6d92-483b-8731-f362570d3a8b", 00:11:08.469 "strip_size_kb": 64, 00:11:08.469 "state": "online", 00:11:08.469 "raid_level": "raid0", 00:11:08.469 "superblock": true, 00:11:08.469 "num_base_bdevs": 4, 00:11:08.469 "num_base_bdevs_discovered": 4, 00:11:08.469 
"num_base_bdevs_operational": 4, 00:11:08.469 "base_bdevs_list": [ 00:11:08.469 { 00:11:08.469 "name": "BaseBdev1", 00:11:08.469 "uuid": "bf14631d-ec2b-5530-9ad0-838d2a87e5de", 00:11:08.469 "is_configured": true, 00:11:08.469 "data_offset": 2048, 00:11:08.469 "data_size": 63488 00:11:08.469 }, 00:11:08.469 { 00:11:08.469 "name": "BaseBdev2", 00:11:08.469 "uuid": "43f46231-76f9-53bd-848c-1f96af1a4ef5", 00:11:08.469 "is_configured": true, 00:11:08.469 "data_offset": 2048, 00:11:08.469 "data_size": 63488 00:11:08.469 }, 00:11:08.469 { 00:11:08.469 "name": "BaseBdev3", 00:11:08.469 "uuid": "0169fcd6-5dca-58db-b030-aedb87c79cfa", 00:11:08.469 "is_configured": true, 00:11:08.469 "data_offset": 2048, 00:11:08.469 "data_size": 63488 00:11:08.469 }, 00:11:08.469 { 00:11:08.469 "name": "BaseBdev4", 00:11:08.469 "uuid": "270e5d11-4624-5f3e-95d9-746607402fb6", 00:11:08.469 "is_configured": true, 00:11:08.469 "data_offset": 2048, 00:11:08.469 "data_size": 63488 00:11:08.469 } 00:11:08.469 ] 00:11:08.469 }' 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.469 18:57:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.036 18:58:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.036 18:58:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.036 [2024-11-26 18:58:00.300959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:09.991 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.992 "name": "raid_bdev1", 00:11:09.992 "uuid": "703bc70a-6d92-483b-8731-f362570d3a8b", 00:11:09.992 "strip_size_kb": 64, 00:11:09.992 "state": "online", 00:11:09.992 "raid_level": "raid0", 00:11:09.992 "superblock": true, 00:11:09.992 "num_base_bdevs": 4, 00:11:09.992 "num_base_bdevs_discovered": 4, 00:11:09.992 "num_base_bdevs_operational": 4, 00:11:09.992 "base_bdevs_list": [ 00:11:09.992 { 00:11:09.992 "name": "BaseBdev1", 00:11:09.992 "uuid": "bf14631d-ec2b-5530-9ad0-838d2a87e5de", 00:11:09.992 "is_configured": true, 00:11:09.992 "data_offset": 2048, 00:11:09.992 "data_size": 63488 00:11:09.992 }, 00:11:09.992 { 00:11:09.992 "name": "BaseBdev2", 00:11:09.992 "uuid": "43f46231-76f9-53bd-848c-1f96af1a4ef5", 00:11:09.992 "is_configured": true, 00:11:09.992 "data_offset": 2048, 00:11:09.992 "data_size": 63488 00:11:09.992 }, 00:11:09.992 { 00:11:09.992 "name": "BaseBdev3", 00:11:09.992 "uuid": "0169fcd6-5dca-58db-b030-aedb87c79cfa", 00:11:09.992 "is_configured": true, 00:11:09.992 "data_offset": 2048, 00:11:09.992 "data_size": 63488 00:11:09.992 }, 00:11:09.992 { 00:11:09.992 "name": "BaseBdev4", 00:11:09.992 "uuid": "270e5d11-4624-5f3e-95d9-746607402fb6", 00:11:09.992 "is_configured": true, 00:11:09.992 "data_offset": 2048, 00:11:09.992 "data_size": 63488 00:11:09.992 } 00:11:09.992 ] 00:11:09.992 }' 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.992 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:10.560 [2024-11-26 18:58:01.679020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.560 [2024-11-26 18:58:01.679061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.560 [2024-11-26 18:58:01.682418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.560 [2024-11-26 18:58:01.682497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.560 [2024-11-26 18:58:01.682561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.560 [2024-11-26 18:58:01.682581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:10.560 { 00:11:10.560 "results": [ 00:11:10.560 { 00:11:10.560 "job": "raid_bdev1", 00:11:10.560 "core_mask": "0x1", 00:11:10.560 "workload": "randrw", 00:11:10.560 "percentage": 50, 00:11:10.560 "status": "finished", 00:11:10.560 "queue_depth": 1, 00:11:10.560 "io_size": 131072, 00:11:10.560 "runtime": 1.375368, 00:11:10.560 "iops": 9747.209474118927, 00:11:10.560 "mibps": 1218.4011842648658, 00:11:10.560 "io_failed": 1, 00:11:10.560 "io_timeout": 0, 00:11:10.560 "avg_latency_us": 143.2840646338073, 00:11:10.560 "min_latency_us": 39.09818181818182, 00:11:10.560 "max_latency_us": 1839.4763636363637 00:11:10.560 } 00:11:10.560 ], 00:11:10.560 "core_count": 1 00:11:10.560 } 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71288 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71288 ']' 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71288 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71288 00:11:10.560 killing process with pid 71288 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71288' 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71288 00:11:10.560 [2024-11-26 18:58:01.718964] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.560 18:58:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71288 00:11:10.818 [2024-11-26 18:58:02.022208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O9GJSQbFB0 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.194 ************************************ 00:11:12.194 END TEST raid_write_error_test 00:11:12.194 ************************************ 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:11:12.194 00:11:12.194 real 0m4.991s 00:11:12.194 user 0m6.133s 00:11:12.194 sys 0m0.635s 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.194 18:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 18:58:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:12.194 18:58:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:12.194 18:58:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.194 18:58:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.194 18:58:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 ************************************ 00:11:12.194 START TEST raid_state_function_test 00:11:12.194 ************************************ 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71434 00:11:12.194 Process raid pid: 71434 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71434' 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71434 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71434 ']' 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.194 18:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 [2024-11-26 18:58:03.364848] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:12.194 [2024-11-26 18:58:03.365064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.194 [2024-11-26 18:58:03.557321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.453 [2024-11-26 18:58:03.699436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.712 [2024-11-26 18:58:03.918994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.712 [2024-11-26 18:58:03.919051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.280 [2024-11-26 18:58:04.431770] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.280 [2024-11-26 18:58:04.432050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.280 [2024-11-26 18:58:04.432081] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.280 [2024-11-26 18:58:04.432101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.280 [2024-11-26 18:58:04.432111] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:13.280 [2024-11-26 18:58:04.432127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.280 [2024-11-26 18:58:04.432137] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.280 [2024-11-26 18:58:04.432152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.280 "name": "Existed_Raid", 00:11:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.280 "strip_size_kb": 64, 00:11:13.280 "state": "configuring", 00:11:13.280 "raid_level": "concat", 00:11:13.280 "superblock": false, 00:11:13.280 "num_base_bdevs": 4, 00:11:13.280 "num_base_bdevs_discovered": 0, 00:11:13.280 "num_base_bdevs_operational": 4, 00:11:13.280 "base_bdevs_list": [ 00:11:13.280 { 00:11:13.280 "name": "BaseBdev1", 00:11:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.280 "is_configured": false, 00:11:13.280 "data_offset": 0, 00:11:13.280 "data_size": 0 00:11:13.280 }, 00:11:13.280 { 00:11:13.280 "name": "BaseBdev2", 00:11:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.280 "is_configured": false, 00:11:13.280 "data_offset": 0, 00:11:13.280 "data_size": 0 00:11:13.280 }, 00:11:13.280 { 00:11:13.280 "name": "BaseBdev3", 00:11:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.280 "is_configured": false, 00:11:13.280 "data_offset": 0, 00:11:13.280 "data_size": 0 00:11:13.280 }, 00:11:13.280 { 00:11:13.280 "name": "BaseBdev4", 00:11:13.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.280 "is_configured": false, 00:11:13.280 "data_offset": 0, 00:11:13.280 "data_size": 0 00:11:13.280 } 00:11:13.280 ] 00:11:13.280 }' 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.280 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.847 [2024-11-26 18:58:04.987864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.847 [2024-11-26 18:58:04.988061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.847 [2024-11-26 18:58:04.995812] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.847 [2024-11-26 18:58:04.996005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.847 [2024-11-26 18:58:04.996033] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.847 [2024-11-26 18:58:04.996052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.847 [2024-11-26 18:58:04.996062] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.847 [2024-11-26 18:58:04.996077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.847 [2024-11-26 18:58:04.996087] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.847 [2024-11-26 18:58:04.996101] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.847 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.848 18:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.848 18:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.848 [2024-11-26 18:58:05.042070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.848 BaseBdev1 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.848 [ 00:11:13.848 { 00:11:13.848 "name": "BaseBdev1", 00:11:13.848 "aliases": [ 00:11:13.848 "05caef54-bcfd-404e-9442-a9699bdaa691" 00:11:13.848 ], 00:11:13.848 "product_name": "Malloc disk", 00:11:13.848 "block_size": 512, 00:11:13.848 "num_blocks": 65536, 00:11:13.848 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:13.848 "assigned_rate_limits": { 00:11:13.848 "rw_ios_per_sec": 0, 00:11:13.848 "rw_mbytes_per_sec": 0, 00:11:13.848 "r_mbytes_per_sec": 0, 00:11:13.848 "w_mbytes_per_sec": 0 00:11:13.848 }, 00:11:13.848 "claimed": true, 00:11:13.848 "claim_type": "exclusive_write", 00:11:13.848 "zoned": false, 00:11:13.848 "supported_io_types": { 00:11:13.848 "read": true, 00:11:13.848 "write": true, 00:11:13.848 "unmap": true, 00:11:13.848 "flush": true, 00:11:13.848 "reset": true, 00:11:13.848 "nvme_admin": false, 00:11:13.848 "nvme_io": false, 00:11:13.848 "nvme_io_md": false, 00:11:13.848 "write_zeroes": true, 00:11:13.848 "zcopy": true, 00:11:13.848 "get_zone_info": false, 00:11:13.848 "zone_management": false, 00:11:13.848 "zone_append": false, 00:11:13.848 "compare": false, 00:11:13.848 "compare_and_write": false, 00:11:13.848 "abort": true, 00:11:13.848 "seek_hole": false, 00:11:13.848 "seek_data": false, 00:11:13.848 "copy": true, 00:11:13.848 "nvme_iov_md": false 00:11:13.848 }, 00:11:13.848 "memory_domains": [ 00:11:13.848 { 00:11:13.848 "dma_device_id": "system", 00:11:13.848 "dma_device_type": 1 00:11:13.848 }, 00:11:13.848 { 00:11:13.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.848 "dma_device_type": 2 00:11:13.848 } 00:11:13.848 ], 00:11:13.848 "driver_specific": {} 00:11:13.848 } 00:11:13.848 ] 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.848 "name": "Existed_Raid", 
00:11:13.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.848 "strip_size_kb": 64, 00:11:13.848 "state": "configuring", 00:11:13.848 "raid_level": "concat", 00:11:13.848 "superblock": false, 00:11:13.848 "num_base_bdevs": 4, 00:11:13.848 "num_base_bdevs_discovered": 1, 00:11:13.848 "num_base_bdevs_operational": 4, 00:11:13.848 "base_bdevs_list": [ 00:11:13.848 { 00:11:13.848 "name": "BaseBdev1", 00:11:13.848 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:13.848 "is_configured": true, 00:11:13.848 "data_offset": 0, 00:11:13.848 "data_size": 65536 00:11:13.848 }, 00:11:13.848 { 00:11:13.848 "name": "BaseBdev2", 00:11:13.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.848 "is_configured": false, 00:11:13.848 "data_offset": 0, 00:11:13.848 "data_size": 0 00:11:13.848 }, 00:11:13.848 { 00:11:13.848 "name": "BaseBdev3", 00:11:13.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.848 "is_configured": false, 00:11:13.848 "data_offset": 0, 00:11:13.848 "data_size": 0 00:11:13.848 }, 00:11:13.848 { 00:11:13.848 "name": "BaseBdev4", 00:11:13.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.848 "is_configured": false, 00:11:13.848 "data_offset": 0, 00:11:13.848 "data_size": 0 00:11:13.848 } 00:11:13.848 ] 00:11:13.848 }' 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.848 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.415 [2024-11-26 18:58:05.550273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.415 [2024-11-26 18:58:05.550339] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.415 [2024-11-26 18:58:05.562357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.415 [2024-11-26 18:58:05.564999] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.415 [2024-11-26 18:58:05.565180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.415 [2024-11-26 18:58:05.565307] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.415 [2024-11-26 18:58:05.565372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.415 [2024-11-26 18:58:05.565480] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.415 [2024-11-26 18:58:05.565540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.415 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.416 "name": "Existed_Raid", 00:11:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.416 "strip_size_kb": 64, 00:11:14.416 "state": "configuring", 00:11:14.416 "raid_level": "concat", 00:11:14.416 "superblock": false, 00:11:14.416 "num_base_bdevs": 4, 00:11:14.416 
"num_base_bdevs_discovered": 1, 00:11:14.416 "num_base_bdevs_operational": 4, 00:11:14.416 "base_bdevs_list": [ 00:11:14.416 { 00:11:14.416 "name": "BaseBdev1", 00:11:14.416 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:14.416 "is_configured": true, 00:11:14.416 "data_offset": 0, 00:11:14.416 "data_size": 65536 00:11:14.416 }, 00:11:14.416 { 00:11:14.416 "name": "BaseBdev2", 00:11:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.416 "is_configured": false, 00:11:14.416 "data_offset": 0, 00:11:14.416 "data_size": 0 00:11:14.416 }, 00:11:14.416 { 00:11:14.416 "name": "BaseBdev3", 00:11:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.416 "is_configured": false, 00:11:14.416 "data_offset": 0, 00:11:14.416 "data_size": 0 00:11:14.416 }, 00:11:14.416 { 00:11:14.416 "name": "BaseBdev4", 00:11:14.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.416 "is_configured": false, 00:11:14.416 "data_offset": 0, 00:11:14.416 "data_size": 0 00:11:14.416 } 00:11:14.416 ] 00:11:14.416 }' 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.416 18:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 [2024-11-26 18:58:06.101446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.983 BaseBdev2 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.983 18:58:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.983 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.983 [ 00:11:14.983 { 00:11:14.983 "name": "BaseBdev2", 00:11:14.983 "aliases": [ 00:11:14.983 "b0a83035-e6e9-47c4-8570-b0349f87c157" 00:11:14.983 ], 00:11:14.983 "product_name": "Malloc disk", 00:11:14.983 "block_size": 512, 00:11:14.984 "num_blocks": 65536, 00:11:14.984 "uuid": "b0a83035-e6e9-47c4-8570-b0349f87c157", 00:11:14.984 "assigned_rate_limits": { 00:11:14.984 "rw_ios_per_sec": 0, 00:11:14.984 "rw_mbytes_per_sec": 0, 00:11:14.984 "r_mbytes_per_sec": 0, 00:11:14.984 "w_mbytes_per_sec": 0 00:11:14.984 }, 00:11:14.984 "claimed": true, 00:11:14.984 "claim_type": "exclusive_write", 00:11:14.984 "zoned": false, 00:11:14.984 "supported_io_types": { 
00:11:14.984 "read": true, 00:11:14.984 "write": true, 00:11:14.984 "unmap": true, 00:11:14.984 "flush": true, 00:11:14.984 "reset": true, 00:11:14.984 "nvme_admin": false, 00:11:14.984 "nvme_io": false, 00:11:14.984 "nvme_io_md": false, 00:11:14.984 "write_zeroes": true, 00:11:14.984 "zcopy": true, 00:11:14.984 "get_zone_info": false, 00:11:14.984 "zone_management": false, 00:11:14.984 "zone_append": false, 00:11:14.984 "compare": false, 00:11:14.984 "compare_and_write": false, 00:11:14.984 "abort": true, 00:11:14.984 "seek_hole": false, 00:11:14.984 "seek_data": false, 00:11:14.984 "copy": true, 00:11:14.984 "nvme_iov_md": false 00:11:14.984 }, 00:11:14.984 "memory_domains": [ 00:11:14.984 { 00:11:14.984 "dma_device_id": "system", 00:11:14.984 "dma_device_type": 1 00:11:14.984 }, 00:11:14.984 { 00:11:14.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.984 "dma_device_type": 2 00:11:14.984 } 00:11:14.984 ], 00:11:14.984 "driver_specific": {} 00:11:14.984 } 00:11:14.984 ] 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.984 "name": "Existed_Raid", 00:11:14.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.984 "strip_size_kb": 64, 00:11:14.984 "state": "configuring", 00:11:14.984 "raid_level": "concat", 00:11:14.984 "superblock": false, 00:11:14.984 "num_base_bdevs": 4, 00:11:14.984 "num_base_bdevs_discovered": 2, 00:11:14.984 "num_base_bdevs_operational": 4, 00:11:14.984 "base_bdevs_list": [ 00:11:14.984 { 00:11:14.984 "name": "BaseBdev1", 00:11:14.984 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:14.984 "is_configured": true, 00:11:14.984 "data_offset": 0, 00:11:14.984 "data_size": 65536 00:11:14.984 }, 00:11:14.984 { 00:11:14.984 "name": "BaseBdev2", 00:11:14.984 "uuid": "b0a83035-e6e9-47c4-8570-b0349f87c157", 00:11:14.984 
"is_configured": true, 00:11:14.984 "data_offset": 0, 00:11:14.984 "data_size": 65536 00:11:14.984 }, 00:11:14.984 { 00:11:14.984 "name": "BaseBdev3", 00:11:14.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.984 "is_configured": false, 00:11:14.984 "data_offset": 0, 00:11:14.984 "data_size": 0 00:11:14.984 }, 00:11:14.984 { 00:11:14.984 "name": "BaseBdev4", 00:11:14.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.984 "is_configured": false, 00:11:14.984 "data_offset": 0, 00:11:14.984 "data_size": 0 00:11:14.984 } 00:11:14.984 ] 00:11:14.984 }' 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.984 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.552 [2024-11-26 18:58:06.715039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.552 BaseBdev3 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.552 [ 00:11:15.552 { 00:11:15.552 "name": "BaseBdev3", 00:11:15.552 "aliases": [ 00:11:15.552 "12b5a95f-472a-4ef2-bcd2-89c5dbaa24f2" 00:11:15.552 ], 00:11:15.552 "product_name": "Malloc disk", 00:11:15.552 "block_size": 512, 00:11:15.552 "num_blocks": 65536, 00:11:15.552 "uuid": "12b5a95f-472a-4ef2-bcd2-89c5dbaa24f2", 00:11:15.552 "assigned_rate_limits": { 00:11:15.552 "rw_ios_per_sec": 0, 00:11:15.552 "rw_mbytes_per_sec": 0, 00:11:15.552 "r_mbytes_per_sec": 0, 00:11:15.552 "w_mbytes_per_sec": 0 00:11:15.552 }, 00:11:15.552 "claimed": true, 00:11:15.552 "claim_type": "exclusive_write", 00:11:15.552 "zoned": false, 00:11:15.552 "supported_io_types": { 00:11:15.552 "read": true, 00:11:15.552 "write": true, 00:11:15.552 "unmap": true, 00:11:15.552 "flush": true, 00:11:15.552 "reset": true, 00:11:15.552 "nvme_admin": false, 00:11:15.552 "nvme_io": false, 00:11:15.552 "nvme_io_md": false, 00:11:15.552 "write_zeroes": true, 00:11:15.552 "zcopy": true, 00:11:15.552 "get_zone_info": false, 00:11:15.552 "zone_management": false, 00:11:15.552 "zone_append": false, 00:11:15.552 "compare": false, 00:11:15.552 "compare_and_write": false, 
00:11:15.552 "abort": true, 00:11:15.552 "seek_hole": false, 00:11:15.552 "seek_data": false, 00:11:15.552 "copy": true, 00:11:15.552 "nvme_iov_md": false 00:11:15.552 }, 00:11:15.552 "memory_domains": [ 00:11:15.552 { 00:11:15.552 "dma_device_id": "system", 00:11:15.552 "dma_device_type": 1 00:11:15.552 }, 00:11:15.552 { 00:11:15.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.552 "dma_device_type": 2 00:11:15.552 } 00:11:15.552 ], 00:11:15.552 "driver_specific": {} 00:11:15.552 } 00:11:15.552 ] 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.552 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.552 "name": "Existed_Raid", 00:11:15.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.552 "strip_size_kb": 64, 00:11:15.552 "state": "configuring", 00:11:15.552 "raid_level": "concat", 00:11:15.552 "superblock": false, 00:11:15.552 "num_base_bdevs": 4, 00:11:15.552 "num_base_bdevs_discovered": 3, 00:11:15.552 "num_base_bdevs_operational": 4, 00:11:15.552 "base_bdevs_list": [ 00:11:15.552 { 00:11:15.552 "name": "BaseBdev1", 00:11:15.552 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:15.552 "is_configured": true, 00:11:15.552 "data_offset": 0, 00:11:15.552 "data_size": 65536 00:11:15.552 }, 00:11:15.552 { 00:11:15.552 "name": "BaseBdev2", 00:11:15.553 "uuid": "b0a83035-e6e9-47c4-8570-b0349f87c157", 00:11:15.553 "is_configured": true, 00:11:15.553 "data_offset": 0, 00:11:15.553 "data_size": 65536 00:11:15.553 }, 00:11:15.553 { 00:11:15.553 "name": "BaseBdev3", 00:11:15.553 "uuid": "12b5a95f-472a-4ef2-bcd2-89c5dbaa24f2", 00:11:15.553 "is_configured": true, 00:11:15.553 "data_offset": 0, 00:11:15.553 "data_size": 65536 00:11:15.553 }, 00:11:15.553 { 00:11:15.553 "name": "BaseBdev4", 00:11:15.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.553 "is_configured": false, 
00:11:15.553 "data_offset": 0, 00:11:15.553 "data_size": 0 00:11:15.553 } 00:11:15.553 ] 00:11:15.553 }' 00:11:15.553 18:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.553 18:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.119 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.119 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.120 [2024-11-26 18:58:07.338675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.120 [2024-11-26 18:58:07.338750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.120 [2024-11-26 18:58:07.338763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:16.120 [2024-11-26 18:58:07.339163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.120 [2024-11-26 18:58:07.339392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.120 [2024-11-26 18:58:07.339431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:16.120 [2024-11-26 18:58:07.339761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.120 BaseBdev4 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.120 [ 00:11:16.120 { 00:11:16.120 "name": "BaseBdev4", 00:11:16.120 "aliases": [ 00:11:16.120 "86564521-f874-4c69-8a06-4233ea3d7837" 00:11:16.120 ], 00:11:16.120 "product_name": "Malloc disk", 00:11:16.120 "block_size": 512, 00:11:16.120 "num_blocks": 65536, 00:11:16.120 "uuid": "86564521-f874-4c69-8a06-4233ea3d7837", 00:11:16.120 "assigned_rate_limits": { 00:11:16.120 "rw_ios_per_sec": 0, 00:11:16.120 "rw_mbytes_per_sec": 0, 00:11:16.120 "r_mbytes_per_sec": 0, 00:11:16.120 "w_mbytes_per_sec": 0 00:11:16.120 }, 00:11:16.120 "claimed": true, 00:11:16.120 "claim_type": "exclusive_write", 00:11:16.120 "zoned": false, 00:11:16.120 "supported_io_types": { 00:11:16.120 "read": true, 00:11:16.120 "write": true, 00:11:16.120 "unmap": true, 00:11:16.120 "flush": true, 00:11:16.120 "reset": true, 00:11:16.120 
"nvme_admin": false, 00:11:16.120 "nvme_io": false, 00:11:16.120 "nvme_io_md": false, 00:11:16.120 "write_zeroes": true, 00:11:16.120 "zcopy": true, 00:11:16.120 "get_zone_info": false, 00:11:16.120 "zone_management": false, 00:11:16.120 "zone_append": false, 00:11:16.120 "compare": false, 00:11:16.120 "compare_and_write": false, 00:11:16.120 "abort": true, 00:11:16.120 "seek_hole": false, 00:11:16.120 "seek_data": false, 00:11:16.120 "copy": true, 00:11:16.120 "nvme_iov_md": false 00:11:16.120 }, 00:11:16.120 "memory_domains": [ 00:11:16.120 { 00:11:16.120 "dma_device_id": "system", 00:11:16.120 "dma_device_type": 1 00:11:16.120 }, 00:11:16.120 { 00:11:16.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.120 "dma_device_type": 2 00:11:16.120 } 00:11:16.120 ], 00:11:16.120 "driver_specific": {} 00:11:16.120 } 00:11:16.120 ] 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.120 
18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.120 "name": "Existed_Raid", 00:11:16.120 "uuid": "65188ecb-f410-4810-9759-6cf005669c27", 00:11:16.120 "strip_size_kb": 64, 00:11:16.120 "state": "online", 00:11:16.120 "raid_level": "concat", 00:11:16.120 "superblock": false, 00:11:16.120 "num_base_bdevs": 4, 00:11:16.120 "num_base_bdevs_discovered": 4, 00:11:16.120 "num_base_bdevs_operational": 4, 00:11:16.120 "base_bdevs_list": [ 00:11:16.120 { 00:11:16.120 "name": "BaseBdev1", 00:11:16.120 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:16.120 "is_configured": true, 00:11:16.120 "data_offset": 0, 00:11:16.120 "data_size": 65536 00:11:16.120 }, 00:11:16.120 { 00:11:16.120 "name": "BaseBdev2", 00:11:16.120 "uuid": "b0a83035-e6e9-47c4-8570-b0349f87c157", 00:11:16.120 "is_configured": true, 00:11:16.120 "data_offset": 0, 00:11:16.120 "data_size": 65536 00:11:16.120 }, 00:11:16.120 { 00:11:16.120 "name": "BaseBdev3", 
00:11:16.120 "uuid": "12b5a95f-472a-4ef2-bcd2-89c5dbaa24f2", 00:11:16.120 "is_configured": true, 00:11:16.120 "data_offset": 0, 00:11:16.120 "data_size": 65536 00:11:16.120 }, 00:11:16.120 { 00:11:16.120 "name": "BaseBdev4", 00:11:16.120 "uuid": "86564521-f874-4c69-8a06-4233ea3d7837", 00:11:16.120 "is_configured": true, 00:11:16.120 "data_offset": 0, 00:11:16.120 "data_size": 65536 00:11:16.120 } 00:11:16.120 ] 00:11:16.120 }' 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.120 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.691 [2024-11-26 18:58:07.883445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.691 
18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.691 "name": "Existed_Raid", 00:11:16.691 "aliases": [ 00:11:16.691 "65188ecb-f410-4810-9759-6cf005669c27" 00:11:16.691 ], 00:11:16.691 "product_name": "Raid Volume", 00:11:16.691 "block_size": 512, 00:11:16.691 "num_blocks": 262144, 00:11:16.691 "uuid": "65188ecb-f410-4810-9759-6cf005669c27", 00:11:16.691 "assigned_rate_limits": { 00:11:16.691 "rw_ios_per_sec": 0, 00:11:16.691 "rw_mbytes_per_sec": 0, 00:11:16.691 "r_mbytes_per_sec": 0, 00:11:16.691 "w_mbytes_per_sec": 0 00:11:16.691 }, 00:11:16.691 "claimed": false, 00:11:16.691 "zoned": false, 00:11:16.691 "supported_io_types": { 00:11:16.691 "read": true, 00:11:16.691 "write": true, 00:11:16.691 "unmap": true, 00:11:16.691 "flush": true, 00:11:16.691 "reset": true, 00:11:16.691 "nvme_admin": false, 00:11:16.691 "nvme_io": false, 00:11:16.691 "nvme_io_md": false, 00:11:16.691 "write_zeroes": true, 00:11:16.691 "zcopy": false, 00:11:16.691 "get_zone_info": false, 00:11:16.691 "zone_management": false, 00:11:16.691 "zone_append": false, 00:11:16.691 "compare": false, 00:11:16.691 "compare_and_write": false, 00:11:16.691 "abort": false, 00:11:16.691 "seek_hole": false, 00:11:16.691 "seek_data": false, 00:11:16.691 "copy": false, 00:11:16.691 "nvme_iov_md": false 00:11:16.691 }, 00:11:16.691 "memory_domains": [ 00:11:16.691 { 00:11:16.691 "dma_device_id": "system", 00:11:16.691 "dma_device_type": 1 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.691 "dma_device_type": 2 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": "system", 00:11:16.691 "dma_device_type": 1 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.691 "dma_device_type": 2 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": "system", 00:11:16.691 "dma_device_type": 1 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:16.691 "dma_device_type": 2 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": "system", 00:11:16.691 "dma_device_type": 1 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.691 "dma_device_type": 2 00:11:16.691 } 00:11:16.691 ], 00:11:16.691 "driver_specific": { 00:11:16.691 "raid": { 00:11:16.691 "uuid": "65188ecb-f410-4810-9759-6cf005669c27", 00:11:16.691 "strip_size_kb": 64, 00:11:16.691 "state": "online", 00:11:16.691 "raid_level": "concat", 00:11:16.691 "superblock": false, 00:11:16.691 "num_base_bdevs": 4, 00:11:16.691 "num_base_bdevs_discovered": 4, 00:11:16.691 "num_base_bdevs_operational": 4, 00:11:16.691 "base_bdevs_list": [ 00:11:16.691 { 00:11:16.691 "name": "BaseBdev1", 00:11:16.691 "uuid": "05caef54-bcfd-404e-9442-a9699bdaa691", 00:11:16.691 "is_configured": true, 00:11:16.691 "data_offset": 0, 00:11:16.691 "data_size": 65536 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "name": "BaseBdev2", 00:11:16.691 "uuid": "b0a83035-e6e9-47c4-8570-b0349f87c157", 00:11:16.691 "is_configured": true, 00:11:16.691 "data_offset": 0, 00:11:16.691 "data_size": 65536 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "name": "BaseBdev3", 00:11:16.691 "uuid": "12b5a95f-472a-4ef2-bcd2-89c5dbaa24f2", 00:11:16.691 "is_configured": true, 00:11:16.691 "data_offset": 0, 00:11:16.691 "data_size": 65536 00:11:16.691 }, 00:11:16.691 { 00:11:16.691 "name": "BaseBdev4", 00:11:16.691 "uuid": "86564521-f874-4c69-8a06-4233ea3d7837", 00:11:16.691 "is_configured": true, 00:11:16.691 "data_offset": 0, 00:11:16.691 "data_size": 65536 00:11:16.691 } 00:11:16.691 ] 00:11:16.691 } 00:11:16.691 } 00:11:16.691 }' 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:16.691 BaseBdev2 
00:11:16.691 BaseBdev3 00:11:16.691 BaseBdev4' 00:11:16.691 18:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.691 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.691 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.691 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:16.691 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.691 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.691 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.951 18:58:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.951 18:58:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.951 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.951 [2024-11-26 18:58:08.275197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.951 [2024-11-26 18:58:08.275241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.951 [2024-11-26 18:58:08.275344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.212 "name": "Existed_Raid", 00:11:17.212 "uuid": "65188ecb-f410-4810-9759-6cf005669c27", 00:11:17.212 "strip_size_kb": 64, 00:11:17.212 "state": "offline", 00:11:17.212 "raid_level": "concat", 00:11:17.212 "superblock": false, 00:11:17.212 "num_base_bdevs": 4, 00:11:17.212 "num_base_bdevs_discovered": 3, 00:11:17.212 "num_base_bdevs_operational": 3, 00:11:17.212 "base_bdevs_list": [ 00:11:17.212 { 00:11:17.212 "name": null, 00:11:17.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.212 "is_configured": false, 00:11:17.212 "data_offset": 0, 00:11:17.212 "data_size": 65536 00:11:17.212 }, 00:11:17.212 { 00:11:17.212 "name": "BaseBdev2", 00:11:17.212 "uuid": "b0a83035-e6e9-47c4-8570-b0349f87c157", 00:11:17.212 "is_configured": 
true, 00:11:17.212 "data_offset": 0, 00:11:17.212 "data_size": 65536 00:11:17.212 }, 00:11:17.212 { 00:11:17.212 "name": "BaseBdev3", 00:11:17.212 "uuid": "12b5a95f-472a-4ef2-bcd2-89c5dbaa24f2", 00:11:17.212 "is_configured": true, 00:11:17.212 "data_offset": 0, 00:11:17.212 "data_size": 65536 00:11:17.212 }, 00:11:17.212 { 00:11:17.212 "name": "BaseBdev4", 00:11:17.212 "uuid": "86564521-f874-4c69-8a06-4233ea3d7837", 00:11:17.212 "is_configured": true, 00:11:17.212 "data_offset": 0, 00:11:17.212 "data_size": 65536 00:11:17.212 } 00:11:17.212 ] 00:11:17.212 }' 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.212 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:17.781 18:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.781 [2024-11-26 18:58:08.968536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.781 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:17.782 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.782 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.782 [2024-11-26 18:58:09.123546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.046 18:58:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.046 [2024-11-26 18:58:09.274920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.046 [2024-11-26 18:58:09.275004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:18.046 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.047 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.047 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.310 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.310 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.310 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.310 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.311 BaseBdev2 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.311 [ 00:11:18.311 { 00:11:18.311 "name": "BaseBdev2", 00:11:18.311 "aliases": [ 00:11:18.311 "f4469695-6466-4d03-b138-9d4c068030f1" 00:11:18.311 ], 00:11:18.311 "product_name": "Malloc disk", 00:11:18.311 "block_size": 512, 00:11:18.311 "num_blocks": 65536, 00:11:18.311 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:18.311 "assigned_rate_limits": { 00:11:18.311 "rw_ios_per_sec": 0, 00:11:18.311 "rw_mbytes_per_sec": 0, 00:11:18.311 "r_mbytes_per_sec": 0, 00:11:18.311 "w_mbytes_per_sec": 0 00:11:18.311 }, 00:11:18.311 "claimed": false, 00:11:18.311 "zoned": false, 00:11:18.311 "supported_io_types": { 00:11:18.311 "read": true, 00:11:18.311 "write": true, 00:11:18.311 "unmap": true, 00:11:18.311 "flush": true, 00:11:18.311 "reset": true, 00:11:18.311 "nvme_admin": false, 00:11:18.311 "nvme_io": false, 00:11:18.311 "nvme_io_md": false, 00:11:18.311 "write_zeroes": true, 00:11:18.311 "zcopy": true, 00:11:18.311 "get_zone_info": false, 00:11:18.311 "zone_management": false, 00:11:18.311 "zone_append": false, 00:11:18.311 "compare": false, 00:11:18.311 "compare_and_write": false, 00:11:18.311 "abort": true, 00:11:18.311 "seek_hole": false, 00:11:18.311 
"seek_data": false, 00:11:18.311 "copy": true, 00:11:18.311 "nvme_iov_md": false 00:11:18.311 }, 00:11:18.311 "memory_domains": [ 00:11:18.311 { 00:11:18.311 "dma_device_id": "system", 00:11:18.311 "dma_device_type": 1 00:11:18.311 }, 00:11:18.311 { 00:11:18.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.311 "dma_device_type": 2 00:11:18.311 } 00:11:18.311 ], 00:11:18.311 "driver_specific": {} 00:11:18.311 } 00:11:18.311 ] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.311 BaseBdev3 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.311 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.311 [ 00:11:18.311 { 00:11:18.311 "name": "BaseBdev3", 00:11:18.311 "aliases": [ 00:11:18.311 "b6fab27c-c8e8-44d8-9613-a3c914fc0726" 00:11:18.311 ], 00:11:18.311 "product_name": "Malloc disk", 00:11:18.311 "block_size": 512, 00:11:18.311 "num_blocks": 65536, 00:11:18.311 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:18.311 "assigned_rate_limits": { 00:11:18.311 "rw_ios_per_sec": 0, 00:11:18.311 "rw_mbytes_per_sec": 0, 00:11:18.311 "r_mbytes_per_sec": 0, 00:11:18.311 "w_mbytes_per_sec": 0 00:11:18.311 }, 00:11:18.311 "claimed": false, 00:11:18.311 "zoned": false, 00:11:18.311 "supported_io_types": { 00:11:18.311 "read": true, 00:11:18.311 "write": true, 00:11:18.311 "unmap": true, 00:11:18.311 "flush": true, 00:11:18.311 "reset": true, 00:11:18.311 "nvme_admin": false, 00:11:18.311 "nvme_io": false, 00:11:18.311 "nvme_io_md": false, 00:11:18.311 "write_zeroes": true, 00:11:18.311 "zcopy": true, 00:11:18.311 "get_zone_info": false, 00:11:18.311 "zone_management": false, 00:11:18.311 "zone_append": false, 00:11:18.311 "compare": false, 00:11:18.311 "compare_and_write": false, 00:11:18.311 "abort": true, 00:11:18.311 "seek_hole": false, 00:11:18.311 "seek_data": false, 
00:11:18.311 "copy": true, 00:11:18.311 "nvme_iov_md": false 00:11:18.311 }, 00:11:18.311 "memory_domains": [ 00:11:18.311 { 00:11:18.311 "dma_device_id": "system", 00:11:18.311 "dma_device_type": 1 00:11:18.311 }, 00:11:18.311 { 00:11:18.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.311 "dma_device_type": 2 00:11:18.312 } 00:11:18.312 ], 00:11:18.312 "driver_specific": {} 00:11:18.312 } 00:11:18.312 ] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 BaseBdev4 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.312 
18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 [ 00:11:18.312 { 00:11:18.312 "name": "BaseBdev4", 00:11:18.312 "aliases": [ 00:11:18.312 "9a988f0c-7b16-44bb-b087-20c27bb7d5a9" 00:11:18.312 ], 00:11:18.312 "product_name": "Malloc disk", 00:11:18.312 "block_size": 512, 00:11:18.312 "num_blocks": 65536, 00:11:18.312 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:18.312 "assigned_rate_limits": { 00:11:18.312 "rw_ios_per_sec": 0, 00:11:18.312 "rw_mbytes_per_sec": 0, 00:11:18.312 "r_mbytes_per_sec": 0, 00:11:18.312 "w_mbytes_per_sec": 0 00:11:18.312 }, 00:11:18.312 "claimed": false, 00:11:18.312 "zoned": false, 00:11:18.312 "supported_io_types": { 00:11:18.312 "read": true, 00:11:18.312 "write": true, 00:11:18.312 "unmap": true, 00:11:18.312 "flush": true, 00:11:18.312 "reset": true, 00:11:18.312 "nvme_admin": false, 00:11:18.312 "nvme_io": false, 00:11:18.312 "nvme_io_md": false, 00:11:18.312 "write_zeroes": true, 00:11:18.312 "zcopy": true, 00:11:18.312 "get_zone_info": false, 00:11:18.312 "zone_management": false, 00:11:18.312 "zone_append": false, 00:11:18.312 "compare": false, 00:11:18.312 "compare_and_write": false, 00:11:18.312 "abort": true, 00:11:18.312 "seek_hole": false, 00:11:18.312 "seek_data": false, 00:11:18.312 
"copy": true, 00:11:18.312 "nvme_iov_md": false 00:11:18.312 }, 00:11:18.312 "memory_domains": [ 00:11:18.312 { 00:11:18.312 "dma_device_id": "system", 00:11:18.312 "dma_device_type": 1 00:11:18.312 }, 00:11:18.312 { 00:11:18.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.312 "dma_device_type": 2 00:11:18.312 } 00:11:18.312 ], 00:11:18.312 "driver_specific": {} 00:11:18.312 } 00:11:18.312 ] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 [2024-11-26 18:58:09.660721] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.312 [2024-11-26 18:58:09.660788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.312 [2024-11-26 18:58:09.660830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.312 [2024-11-26 18:58:09.663441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.312 [2024-11-26 18:58:09.663522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 18:58:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.576 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.576 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.576 "name": "Existed_Raid", 00:11:18.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.576 "strip_size_kb": 64, 00:11:18.576 "state": "configuring", 00:11:18.576 
"raid_level": "concat", 00:11:18.576 "superblock": false, 00:11:18.576 "num_base_bdevs": 4, 00:11:18.576 "num_base_bdevs_discovered": 3, 00:11:18.576 "num_base_bdevs_operational": 4, 00:11:18.576 "base_bdevs_list": [ 00:11:18.576 { 00:11:18.576 "name": "BaseBdev1", 00:11:18.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.576 "is_configured": false, 00:11:18.576 "data_offset": 0, 00:11:18.576 "data_size": 0 00:11:18.576 }, 00:11:18.576 { 00:11:18.576 "name": "BaseBdev2", 00:11:18.576 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:18.576 "is_configured": true, 00:11:18.576 "data_offset": 0, 00:11:18.576 "data_size": 65536 00:11:18.576 }, 00:11:18.576 { 00:11:18.576 "name": "BaseBdev3", 00:11:18.576 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:18.576 "is_configured": true, 00:11:18.576 "data_offset": 0, 00:11:18.576 "data_size": 65536 00:11:18.576 }, 00:11:18.576 { 00:11:18.576 "name": "BaseBdev4", 00:11:18.576 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:18.576 "is_configured": true, 00:11:18.576 "data_offset": 0, 00:11:18.576 "data_size": 65536 00:11:18.576 } 00:11:18.576 ] 00:11:18.576 }' 00:11:18.576 18:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.576 18:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 [2024-11-26 18:58:10.204866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.143 "name": "Existed_Raid", 00:11:19.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.143 "strip_size_kb": 64, 00:11:19.143 "state": "configuring", 00:11:19.143 "raid_level": "concat", 00:11:19.143 "superblock": false, 
00:11:19.143 "num_base_bdevs": 4, 00:11:19.143 "num_base_bdevs_discovered": 2, 00:11:19.143 "num_base_bdevs_operational": 4, 00:11:19.143 "base_bdevs_list": [ 00:11:19.143 { 00:11:19.143 "name": "BaseBdev1", 00:11:19.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.143 "is_configured": false, 00:11:19.143 "data_offset": 0, 00:11:19.143 "data_size": 0 00:11:19.143 }, 00:11:19.143 { 00:11:19.143 "name": null, 00:11:19.143 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:19.143 "is_configured": false, 00:11:19.143 "data_offset": 0, 00:11:19.143 "data_size": 65536 00:11:19.143 }, 00:11:19.143 { 00:11:19.143 "name": "BaseBdev3", 00:11:19.143 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:19.143 "is_configured": true, 00:11:19.143 "data_offset": 0, 00:11:19.143 "data_size": 65536 00:11:19.143 }, 00:11:19.143 { 00:11:19.143 "name": "BaseBdev4", 00:11:19.143 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:19.143 "is_configured": true, 00:11:19.143 "data_offset": 0, 00:11:19.143 "data_size": 65536 00:11:19.143 } 00:11:19.143 ] 00:11:19.143 }' 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.143 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.402 18:58:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.402 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.662 [2024-11-26 18:58:10.800245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.662 BaseBdev1 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.662 18:58:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.662 [ 00:11:19.662 { 00:11:19.662 "name": "BaseBdev1", 00:11:19.662 "aliases": [ 00:11:19.662 "9f5bb740-479f-4fec-a6db-9080a198964f" 00:11:19.662 ], 00:11:19.662 "product_name": "Malloc disk", 00:11:19.662 "block_size": 512, 00:11:19.662 "num_blocks": 65536, 00:11:19.662 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:19.662 "assigned_rate_limits": { 00:11:19.662 "rw_ios_per_sec": 0, 00:11:19.662 "rw_mbytes_per_sec": 0, 00:11:19.662 "r_mbytes_per_sec": 0, 00:11:19.662 "w_mbytes_per_sec": 0 00:11:19.662 }, 00:11:19.662 "claimed": true, 00:11:19.662 "claim_type": "exclusive_write", 00:11:19.662 "zoned": false, 00:11:19.662 "supported_io_types": { 00:11:19.662 "read": true, 00:11:19.662 "write": true, 00:11:19.662 "unmap": true, 00:11:19.662 "flush": true, 00:11:19.662 "reset": true, 00:11:19.662 "nvme_admin": false, 00:11:19.662 "nvme_io": false, 00:11:19.662 "nvme_io_md": false, 00:11:19.662 "write_zeroes": true, 00:11:19.662 "zcopy": true, 00:11:19.662 "get_zone_info": false, 00:11:19.662 "zone_management": false, 00:11:19.662 "zone_append": false, 00:11:19.662 "compare": false, 00:11:19.662 "compare_and_write": false, 00:11:19.662 "abort": true, 00:11:19.662 "seek_hole": false, 00:11:19.662 "seek_data": false, 00:11:19.662 "copy": true, 00:11:19.662 "nvme_iov_md": false 00:11:19.662 }, 00:11:19.662 "memory_domains": [ 00:11:19.662 { 00:11:19.662 "dma_device_id": "system", 00:11:19.662 "dma_device_type": 1 00:11:19.662 }, 00:11:19.662 { 00:11:19.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.662 "dma_device_type": 2 00:11:19.663 } 00:11:19.663 ], 00:11:19.663 "driver_specific": {} 00:11:19.663 } 00:11:19.663 ] 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.663 "name": "Existed_Raid", 00:11:19.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.663 "strip_size_kb": 64, 00:11:19.663 "state": "configuring", 00:11:19.663 "raid_level": "concat", 00:11:19.663 "superblock": false, 
00:11:19.663 "num_base_bdevs": 4, 00:11:19.663 "num_base_bdevs_discovered": 3, 00:11:19.663 "num_base_bdevs_operational": 4, 00:11:19.663 "base_bdevs_list": [ 00:11:19.663 { 00:11:19.663 "name": "BaseBdev1", 00:11:19.663 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:19.663 "is_configured": true, 00:11:19.663 "data_offset": 0, 00:11:19.663 "data_size": 65536 00:11:19.663 }, 00:11:19.663 { 00:11:19.663 "name": null, 00:11:19.663 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:19.663 "is_configured": false, 00:11:19.663 "data_offset": 0, 00:11:19.663 "data_size": 65536 00:11:19.663 }, 00:11:19.663 { 00:11:19.663 "name": "BaseBdev3", 00:11:19.663 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:19.663 "is_configured": true, 00:11:19.663 "data_offset": 0, 00:11:19.663 "data_size": 65536 00:11:19.663 }, 00:11:19.663 { 00:11:19.663 "name": "BaseBdev4", 00:11:19.663 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:19.663 "is_configured": true, 00:11:19.663 "data_offset": 0, 00:11:19.663 "data_size": 65536 00:11:19.663 } 00:11:19.663 ] 00:11:19.663 }' 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.663 18:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:20.231 18:58:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.231 [2024-11-26 18:58:11.424623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.231 18:58:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.231 "name": "Existed_Raid", 00:11:20.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.231 "strip_size_kb": 64, 00:11:20.231 "state": "configuring", 00:11:20.231 "raid_level": "concat", 00:11:20.231 "superblock": false, 00:11:20.231 "num_base_bdevs": 4, 00:11:20.231 "num_base_bdevs_discovered": 2, 00:11:20.231 "num_base_bdevs_operational": 4, 00:11:20.231 "base_bdevs_list": [ 00:11:20.231 { 00:11:20.231 "name": "BaseBdev1", 00:11:20.231 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:20.231 "is_configured": true, 00:11:20.231 "data_offset": 0, 00:11:20.231 "data_size": 65536 00:11:20.231 }, 00:11:20.231 { 00:11:20.231 "name": null, 00:11:20.231 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:20.231 "is_configured": false, 00:11:20.231 "data_offset": 0, 00:11:20.231 "data_size": 65536 00:11:20.231 }, 00:11:20.231 { 00:11:20.231 "name": null, 00:11:20.231 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:20.231 "is_configured": false, 00:11:20.231 "data_offset": 0, 00:11:20.231 "data_size": 65536 00:11:20.231 }, 00:11:20.231 { 00:11:20.231 "name": "BaseBdev4", 00:11:20.231 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:20.231 "is_configured": true, 00:11:20.231 "data_offset": 0, 00:11:20.231 "data_size": 65536 00:11:20.231 } 00:11:20.231 ] 00:11:20.231 }' 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.231 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.798 18:58:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.798 18:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.798 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.798 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.798 18:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.798 [2024-11-26 18:58:12.012776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.798 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.798 "name": "Existed_Raid", 00:11:20.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.798 "strip_size_kb": 64, 00:11:20.798 "state": "configuring", 00:11:20.798 "raid_level": "concat", 00:11:20.798 "superblock": false, 00:11:20.798 "num_base_bdevs": 4, 00:11:20.798 "num_base_bdevs_discovered": 3, 00:11:20.799 "num_base_bdevs_operational": 4, 00:11:20.799 "base_bdevs_list": [ 00:11:20.799 { 00:11:20.799 "name": "BaseBdev1", 00:11:20.799 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "name": null, 00:11:20.799 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:20.799 "is_configured": false, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "name": "BaseBdev3", 00:11:20.799 "uuid": 
"b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "name": "BaseBdev4", 00:11:20.799 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 } 00:11:20.799 ] 00:11:20.799 }' 00:11:20.799 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.799 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.367 [2024-11-26 18:58:12.628990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.367 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.627 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.627 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.627 "name": "Existed_Raid", 00:11:21.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.627 "strip_size_kb": 64, 00:11:21.627 "state": "configuring", 00:11:21.627 "raid_level": "concat", 00:11:21.627 "superblock": false, 00:11:21.627 "num_base_bdevs": 4, 00:11:21.627 
"num_base_bdevs_discovered": 2, 00:11:21.627 "num_base_bdevs_operational": 4, 00:11:21.627 "base_bdevs_list": [ 00:11:21.627 { 00:11:21.627 "name": null, 00:11:21.627 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:21.627 "is_configured": false, 00:11:21.627 "data_offset": 0, 00:11:21.627 "data_size": 65536 00:11:21.627 }, 00:11:21.627 { 00:11:21.627 "name": null, 00:11:21.627 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:21.627 "is_configured": false, 00:11:21.627 "data_offset": 0, 00:11:21.627 "data_size": 65536 00:11:21.627 }, 00:11:21.627 { 00:11:21.627 "name": "BaseBdev3", 00:11:21.627 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:21.627 "is_configured": true, 00:11:21.627 "data_offset": 0, 00:11:21.627 "data_size": 65536 00:11:21.627 }, 00:11:21.627 { 00:11:21.627 "name": "BaseBdev4", 00:11:21.627 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:21.627 "is_configured": true, 00:11:21.627 "data_offset": 0, 00:11:21.627 "data_size": 65536 00:11:21.627 } 00:11:21.627 ] 00:11:21.627 }' 00:11:21.627 18:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.627 18:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.887 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.887 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.888 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.888 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.146 [2024-11-26 18:58:13.297807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.146 18:58:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.147 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.147 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.147 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.147 "name": "Existed_Raid", 00:11:22.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.147 "strip_size_kb": 64, 00:11:22.147 "state": "configuring", 00:11:22.147 "raid_level": "concat", 00:11:22.147 "superblock": false, 00:11:22.147 "num_base_bdevs": 4, 00:11:22.147 "num_base_bdevs_discovered": 3, 00:11:22.147 "num_base_bdevs_operational": 4, 00:11:22.147 "base_bdevs_list": [ 00:11:22.147 { 00:11:22.147 "name": null, 00:11:22.147 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:22.147 "is_configured": false, 00:11:22.147 "data_offset": 0, 00:11:22.147 "data_size": 65536 00:11:22.147 }, 00:11:22.147 { 00:11:22.147 "name": "BaseBdev2", 00:11:22.147 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:22.147 "is_configured": true, 00:11:22.147 "data_offset": 0, 00:11:22.147 "data_size": 65536 00:11:22.147 }, 00:11:22.147 { 00:11:22.147 "name": "BaseBdev3", 00:11:22.147 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:22.147 "is_configured": true, 00:11:22.147 "data_offset": 0, 00:11:22.147 "data_size": 65536 00:11:22.147 }, 00:11:22.147 { 00:11:22.147 "name": "BaseBdev4", 00:11:22.147 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:22.147 "is_configured": true, 00:11:22.147 "data_offset": 0, 00:11:22.147 "data_size": 65536 00:11:22.147 } 00:11:22.147 ] 00:11:22.147 }' 00:11:22.147 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.147 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.770 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:22.770 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.770 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.770 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f5bb740-479f-4fec-a6db-9080a198964f 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.771 [2024-11-26 18:58:13.989258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.771 [2024-11-26 18:58:13.989331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.771 [2024-11-26 18:58:13.989343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:22.771 [2024-11-26 18:58:13.989699] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:22.771 [2024-11-26 18:58:13.989910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.771 [2024-11-26 18:58:13.989931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.771 [2024-11-26 18:58:13.990237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.771 NewBaseBdev 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.771 18:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.771 18:58:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.771 [ 00:11:22.771 { 00:11:22.771 "name": "NewBaseBdev", 00:11:22.771 "aliases": [ 00:11:22.771 "9f5bb740-479f-4fec-a6db-9080a198964f" 00:11:22.771 ], 00:11:22.771 "product_name": "Malloc disk", 00:11:22.771 "block_size": 512, 00:11:22.771 "num_blocks": 65536, 00:11:22.771 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:22.771 "assigned_rate_limits": { 00:11:22.771 "rw_ios_per_sec": 0, 00:11:22.771 "rw_mbytes_per_sec": 0, 00:11:22.771 "r_mbytes_per_sec": 0, 00:11:22.771 "w_mbytes_per_sec": 0 00:11:22.771 }, 00:11:22.771 "claimed": true, 00:11:22.771 "claim_type": "exclusive_write", 00:11:22.771 "zoned": false, 00:11:22.771 "supported_io_types": { 00:11:22.771 "read": true, 00:11:22.771 "write": true, 00:11:22.771 "unmap": true, 00:11:22.771 "flush": true, 00:11:22.771 "reset": true, 00:11:22.771 "nvme_admin": false, 00:11:22.771 "nvme_io": false, 00:11:22.771 "nvme_io_md": false, 00:11:22.771 "write_zeroes": true, 00:11:22.771 "zcopy": true, 00:11:22.771 "get_zone_info": false, 00:11:22.771 "zone_management": false, 00:11:22.771 "zone_append": false, 00:11:22.771 "compare": false, 00:11:22.771 "compare_and_write": false, 00:11:22.771 "abort": true, 00:11:22.771 "seek_hole": false, 00:11:22.771 "seek_data": false, 00:11:22.771 "copy": true, 00:11:22.771 "nvme_iov_md": false 00:11:22.771 }, 00:11:22.771 "memory_domains": [ 00:11:22.771 { 00:11:22.771 "dma_device_id": "system", 00:11:22.771 "dma_device_type": 1 00:11:22.771 }, 00:11:22.771 { 00:11:22.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.771 "dma_device_type": 2 00:11:22.771 } 00:11:22.771 ], 00:11:22.771 "driver_specific": {} 00:11:22.771 } 00:11:22.771 ] 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.771 18:58:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.771 "name": "Existed_Raid", 00:11:22.771 "uuid": "b1e108cb-9784-484b-a0b9-2967a240959a", 00:11:22.771 "strip_size_kb": 64, 00:11:22.771 "state": "online", 00:11:22.771 "raid_level": 
"concat", 00:11:22.771 "superblock": false, 00:11:22.771 "num_base_bdevs": 4, 00:11:22.771 "num_base_bdevs_discovered": 4, 00:11:22.771 "num_base_bdevs_operational": 4, 00:11:22.771 "base_bdevs_list": [ 00:11:22.771 { 00:11:22.771 "name": "NewBaseBdev", 00:11:22.771 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:22.771 "is_configured": true, 00:11:22.771 "data_offset": 0, 00:11:22.771 "data_size": 65536 00:11:22.771 }, 00:11:22.771 { 00:11:22.771 "name": "BaseBdev2", 00:11:22.771 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:22.771 "is_configured": true, 00:11:22.771 "data_offset": 0, 00:11:22.771 "data_size": 65536 00:11:22.771 }, 00:11:22.771 { 00:11:22.771 "name": "BaseBdev3", 00:11:22.771 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:22.771 "is_configured": true, 00:11:22.771 "data_offset": 0, 00:11:22.771 "data_size": 65536 00:11:22.771 }, 00:11:22.771 { 00:11:22.771 "name": "BaseBdev4", 00:11:22.771 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:22.771 "is_configured": true, 00:11:22.771 "data_offset": 0, 00:11:22.771 "data_size": 65536 00:11:22.771 } 00:11:22.771 ] 00:11:22.771 }' 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.771 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.338 [2024-11-26 18:58:14.574096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.338 "name": "Existed_Raid", 00:11:23.338 "aliases": [ 00:11:23.338 "b1e108cb-9784-484b-a0b9-2967a240959a" 00:11:23.338 ], 00:11:23.338 "product_name": "Raid Volume", 00:11:23.338 "block_size": 512, 00:11:23.338 "num_blocks": 262144, 00:11:23.338 "uuid": "b1e108cb-9784-484b-a0b9-2967a240959a", 00:11:23.338 "assigned_rate_limits": { 00:11:23.338 "rw_ios_per_sec": 0, 00:11:23.338 "rw_mbytes_per_sec": 0, 00:11:23.338 "r_mbytes_per_sec": 0, 00:11:23.338 "w_mbytes_per_sec": 0 00:11:23.338 }, 00:11:23.338 "claimed": false, 00:11:23.338 "zoned": false, 00:11:23.338 "supported_io_types": { 00:11:23.338 "read": true, 00:11:23.338 "write": true, 00:11:23.338 "unmap": true, 00:11:23.338 "flush": true, 00:11:23.338 "reset": true, 00:11:23.338 "nvme_admin": false, 00:11:23.338 "nvme_io": false, 00:11:23.338 "nvme_io_md": false, 00:11:23.338 "write_zeroes": true, 00:11:23.338 "zcopy": false, 00:11:23.338 "get_zone_info": false, 00:11:23.338 "zone_management": false, 00:11:23.338 "zone_append": false, 00:11:23.338 "compare": false, 00:11:23.338 "compare_and_write": false, 00:11:23.338 "abort": false, 00:11:23.338 "seek_hole": false, 00:11:23.338 "seek_data": false, 00:11:23.338 "copy": false, 
00:11:23.338 "nvme_iov_md": false 00:11:23.338 }, 00:11:23.338 "memory_domains": [ 00:11:23.338 { 00:11:23.338 "dma_device_id": "system", 00:11:23.338 "dma_device_type": 1 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.338 "dma_device_type": 2 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "system", 00:11:23.338 "dma_device_type": 1 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.338 "dma_device_type": 2 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "system", 00:11:23.338 "dma_device_type": 1 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.338 "dma_device_type": 2 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "system", 00:11:23.338 "dma_device_type": 1 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.338 "dma_device_type": 2 00:11:23.338 } 00:11:23.338 ], 00:11:23.338 "driver_specific": { 00:11:23.338 "raid": { 00:11:23.338 "uuid": "b1e108cb-9784-484b-a0b9-2967a240959a", 00:11:23.338 "strip_size_kb": 64, 00:11:23.338 "state": "online", 00:11:23.338 "raid_level": "concat", 00:11:23.338 "superblock": false, 00:11:23.338 "num_base_bdevs": 4, 00:11:23.338 "num_base_bdevs_discovered": 4, 00:11:23.338 "num_base_bdevs_operational": 4, 00:11:23.338 "base_bdevs_list": [ 00:11:23.338 { 00:11:23.338 "name": "NewBaseBdev", 00:11:23.338 "uuid": "9f5bb740-479f-4fec-a6db-9080a198964f", 00:11:23.338 "is_configured": true, 00:11:23.338 "data_offset": 0, 00:11:23.338 "data_size": 65536 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "name": "BaseBdev2", 00:11:23.338 "uuid": "f4469695-6466-4d03-b138-9d4c068030f1", 00:11:23.338 "is_configured": true, 00:11:23.338 "data_offset": 0, 00:11:23.338 "data_size": 65536 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "name": "BaseBdev3", 00:11:23.338 "uuid": "b6fab27c-c8e8-44d8-9613-a3c914fc0726", 00:11:23.338 
"is_configured": true, 00:11:23.338 "data_offset": 0, 00:11:23.338 "data_size": 65536 00:11:23.338 }, 00:11:23.338 { 00:11:23.338 "name": "BaseBdev4", 00:11:23.338 "uuid": "9a988f0c-7b16-44bb-b087-20c27bb7d5a9", 00:11:23.338 "is_configured": true, 00:11:23.338 "data_offset": 0, 00:11:23.338 "data_size": 65536 00:11:23.338 } 00:11:23.338 ] 00:11:23.338 } 00:11:23.338 } 00:11:23.338 }' 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:23.338 BaseBdev2 00:11:23.338 BaseBdev3 00:11:23.338 BaseBdev4' 00:11:23.338 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.597 18:58:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.597 18:58:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.597 [2024-11-26 18:58:14.941762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.597 [2024-11-26 18:58:14.941807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.597 [2024-11-26 18:58:14.941930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.597 [2024-11-26 18:58:14.942030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.597 [2024-11-26 18:58:14.942047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71434 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71434 ']' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71434 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.597 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71434 00:11:23.855 killing process with pid 71434 00:11:23.855 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.855 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.855 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71434' 00:11:23.855 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71434 00:11:23.855 [2024-11-26 18:58:14.982093] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.855 18:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71434 00:11:24.134 [2024-11-26 18:58:15.380078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.508 ************************************ 00:11:25.508 END TEST raid_state_function_test 00:11:25.508 ************************************ 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.508 00:11:25.508 real 0m13.231s 00:11:25.508 user 0m21.906s 00:11:25.508 sys 0m1.879s 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.508 18:58:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:25.508 18:58:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.508 18:58:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.508 18:58:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.508 ************************************ 00:11:25.508 START TEST raid_state_function_test_sb 00:11:25.508 ************************************ 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.508 
18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:25.508 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=72123 00:11:25.509 Process raid pid: 72123 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72123' 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72123 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72123 ']' 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.509 18:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.509 [2024-11-26 18:58:16.644931] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:25.509 [2024-11-26 18:58:16.645122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.509 [2024-11-26 18:58:16.837338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.766 [2024-11-26 18:58:16.996851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.025 [2024-11-26 18:58:17.232672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.025 [2024-11-26 18:58:17.232726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.596 [2024-11-26 18:58:17.673226] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.596 [2024-11-26 18:58:17.673312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.596 [2024-11-26 18:58:17.673331] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.596 [2024-11-26 18:58:17.673348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.596 [2024-11-26 18:58:17.673359] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:26.596 [2024-11-26 18:58:17.673374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.596 [2024-11-26 18:58:17.673383] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:26.596 [2024-11-26 18:58:17.673408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.596 
18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.596 "name": "Existed_Raid", 00:11:26.596 "uuid": "a2d9d6ef-b219-438c-a88a-9902863bcc22", 00:11:26.596 "strip_size_kb": 64, 00:11:26.596 "state": "configuring", 00:11:26.596 "raid_level": "concat", 00:11:26.596 "superblock": true, 00:11:26.596 "num_base_bdevs": 4, 00:11:26.596 "num_base_bdevs_discovered": 0, 00:11:26.596 "num_base_bdevs_operational": 4, 00:11:26.596 "base_bdevs_list": [ 00:11:26.596 { 00:11:26.596 "name": "BaseBdev1", 00:11:26.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.596 "is_configured": false, 00:11:26.596 "data_offset": 0, 00:11:26.596 "data_size": 0 00:11:26.596 }, 00:11:26.596 { 00:11:26.596 "name": "BaseBdev2", 00:11:26.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.596 "is_configured": false, 00:11:26.596 "data_offset": 0, 00:11:26.596 "data_size": 0 00:11:26.596 }, 00:11:26.596 { 00:11:26.596 "name": "BaseBdev3", 00:11:26.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.596 "is_configured": false, 00:11:26.596 "data_offset": 0, 00:11:26.596 "data_size": 0 00:11:26.596 }, 00:11:26.596 { 00:11:26.596 "name": "BaseBdev4", 00:11:26.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.596 "is_configured": false, 00:11:26.596 "data_offset": 0, 00:11:26.596 "data_size": 0 00:11:26.596 } 00:11:26.596 ] 00:11:26.596 }' 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.596 18:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.856 18:58:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.856 [2024-11-26 18:58:18.201372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.856 [2024-11-26 18:58:18.201437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.856 [2024-11-26 18:58:18.209376] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.856 [2024-11-26 18:58:18.209454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.856 [2024-11-26 18:58:18.209469] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.856 [2024-11-26 18:58:18.209484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.856 [2024-11-26 18:58:18.209494] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.856 [2024-11-26 18:58:18.209508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.856 [2024-11-26 18:58:18.209517] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:26.856 [2024-11-26 18:58:18.209530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.856 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 [2024-11-26 18:58:18.255159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.116 BaseBdev1 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 [ 00:11:27.116 { 00:11:27.116 "name": "BaseBdev1", 00:11:27.116 "aliases": [ 00:11:27.116 "ada7b5b0-952a-486b-9f71-65d75f1464e8" 00:11:27.116 ], 00:11:27.116 "product_name": "Malloc disk", 00:11:27.116 "block_size": 512, 00:11:27.116 "num_blocks": 65536, 00:11:27.116 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:27.116 "assigned_rate_limits": { 00:11:27.116 "rw_ios_per_sec": 0, 00:11:27.116 "rw_mbytes_per_sec": 0, 00:11:27.116 "r_mbytes_per_sec": 0, 00:11:27.116 "w_mbytes_per_sec": 0 00:11:27.116 }, 00:11:27.116 "claimed": true, 00:11:27.116 "claim_type": "exclusive_write", 00:11:27.116 "zoned": false, 00:11:27.116 "supported_io_types": { 00:11:27.116 "read": true, 00:11:27.116 "write": true, 00:11:27.116 "unmap": true, 00:11:27.116 "flush": true, 00:11:27.116 "reset": true, 00:11:27.116 "nvme_admin": false, 00:11:27.116 "nvme_io": false, 00:11:27.116 "nvme_io_md": false, 00:11:27.116 "write_zeroes": true, 00:11:27.116 "zcopy": true, 00:11:27.116 "get_zone_info": false, 00:11:27.116 "zone_management": false, 00:11:27.116 "zone_append": false, 00:11:27.116 "compare": false, 00:11:27.116 "compare_and_write": false, 00:11:27.116 "abort": true, 00:11:27.116 "seek_hole": false, 00:11:27.116 "seek_data": false, 00:11:27.116 "copy": true, 00:11:27.116 "nvme_iov_md": false 00:11:27.116 }, 00:11:27.116 "memory_domains": [ 00:11:27.116 { 00:11:27.116 "dma_device_id": "system", 00:11:27.116 "dma_device_type": 1 00:11:27.116 }, 00:11:27.116 { 00:11:27.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.116 "dma_device_type": 2 00:11:27.116 } 
00:11:27.116 ], 00:11:27.116 "driver_specific": {} 00:11:27.116 } 00:11:27.116 ] 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.116 18:58:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.116 "name": "Existed_Raid", 00:11:27.116 "uuid": "67c50ee5-8a6c-48cc-a237-91a15ad41a50", 00:11:27.116 "strip_size_kb": 64, 00:11:27.116 "state": "configuring", 00:11:27.116 "raid_level": "concat", 00:11:27.116 "superblock": true, 00:11:27.116 "num_base_bdevs": 4, 00:11:27.116 "num_base_bdevs_discovered": 1, 00:11:27.116 "num_base_bdevs_operational": 4, 00:11:27.116 "base_bdevs_list": [ 00:11:27.116 { 00:11:27.116 "name": "BaseBdev1", 00:11:27.116 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:27.116 "is_configured": true, 00:11:27.116 "data_offset": 2048, 00:11:27.116 "data_size": 63488 00:11:27.116 }, 00:11:27.116 { 00:11:27.116 "name": "BaseBdev2", 00:11:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.116 "is_configured": false, 00:11:27.116 "data_offset": 0, 00:11:27.116 "data_size": 0 00:11:27.116 }, 00:11:27.116 { 00:11:27.116 "name": "BaseBdev3", 00:11:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.116 "is_configured": false, 00:11:27.116 "data_offset": 0, 00:11:27.116 "data_size": 0 00:11:27.116 }, 00:11:27.116 { 00:11:27.116 "name": "BaseBdev4", 00:11:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.116 "is_configured": false, 00:11:27.116 "data_offset": 0, 00:11:27.116 "data_size": 0 00:11:27.116 } 00:11:27.116 ] 00:11:27.116 }' 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.116 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 18:58:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-26 18:58:18.819437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.685 [2024-11-26 18:58:18.819505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 [2024-11-26 18:58:18.827460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.685 [2024-11-26 18:58:18.830028] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.685 [2024-11-26 18:58:18.830079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.685 [2024-11-26 18:58:18.830095] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.685 [2024-11-26 18:58:18.830113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.685 [2024-11-26 18:58:18.830123] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.685 [2024-11-26 18:58:18.830137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:27.685 "name": "Existed_Raid", 00:11:27.685 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:27.685 "strip_size_kb": 64, 00:11:27.685 "state": "configuring", 00:11:27.685 "raid_level": "concat", 00:11:27.685 "superblock": true, 00:11:27.685 "num_base_bdevs": 4, 00:11:27.685 "num_base_bdevs_discovered": 1, 00:11:27.685 "num_base_bdevs_operational": 4, 00:11:27.685 "base_bdevs_list": [ 00:11:27.685 { 00:11:27.685 "name": "BaseBdev1", 00:11:27.685 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:27.685 "is_configured": true, 00:11:27.685 "data_offset": 2048, 00:11:27.685 "data_size": 63488 00:11:27.685 }, 00:11:27.685 { 00:11:27.685 "name": "BaseBdev2", 00:11:27.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.685 "is_configured": false, 00:11:27.685 "data_offset": 0, 00:11:27.685 "data_size": 0 00:11:27.685 }, 00:11:27.685 { 00:11:27.685 "name": "BaseBdev3", 00:11:27.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.685 "is_configured": false, 00:11:27.685 "data_offset": 0, 00:11:27.685 "data_size": 0 00:11:27.685 }, 00:11:27.685 { 00:11:27.685 "name": "BaseBdev4", 00:11:27.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.685 "is_configured": false, 00:11:27.685 "data_offset": 0, 00:11:27.685 "data_size": 0 00:11:27.685 } 00:11:27.685 ] 00:11:27.685 }' 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.685 18:58:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.259 [2024-11-26 18:58:19.427698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:28.259 BaseBdev2 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.259 [ 00:11:28.259 { 00:11:28.259 "name": "BaseBdev2", 00:11:28.259 "aliases": [ 00:11:28.259 "88c39432-4b66-447b-9449-a321c2c20f7c" 00:11:28.259 ], 00:11:28.259 "product_name": "Malloc disk", 00:11:28.259 "block_size": 512, 00:11:28.259 "num_blocks": 65536, 00:11:28.259 "uuid": "88c39432-4b66-447b-9449-a321c2c20f7c", 
00:11:28.259 "assigned_rate_limits": { 00:11:28.259 "rw_ios_per_sec": 0, 00:11:28.259 "rw_mbytes_per_sec": 0, 00:11:28.259 "r_mbytes_per_sec": 0, 00:11:28.259 "w_mbytes_per_sec": 0 00:11:28.259 }, 00:11:28.259 "claimed": true, 00:11:28.259 "claim_type": "exclusive_write", 00:11:28.259 "zoned": false, 00:11:28.259 "supported_io_types": { 00:11:28.259 "read": true, 00:11:28.259 "write": true, 00:11:28.259 "unmap": true, 00:11:28.259 "flush": true, 00:11:28.259 "reset": true, 00:11:28.259 "nvme_admin": false, 00:11:28.259 "nvme_io": false, 00:11:28.259 "nvme_io_md": false, 00:11:28.259 "write_zeroes": true, 00:11:28.259 "zcopy": true, 00:11:28.259 "get_zone_info": false, 00:11:28.259 "zone_management": false, 00:11:28.259 "zone_append": false, 00:11:28.259 "compare": false, 00:11:28.259 "compare_and_write": false, 00:11:28.259 "abort": true, 00:11:28.259 "seek_hole": false, 00:11:28.259 "seek_data": false, 00:11:28.259 "copy": true, 00:11:28.259 "nvme_iov_md": false 00:11:28.259 }, 00:11:28.259 "memory_domains": [ 00:11:28.259 { 00:11:28.259 "dma_device_id": "system", 00:11:28.259 "dma_device_type": 1 00:11:28.259 }, 00:11:28.259 { 00:11:28.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.259 "dma_device_type": 2 00:11:28.259 } 00:11:28.259 ], 00:11:28.259 "driver_specific": {} 00:11:28.259 } 00:11:28.259 ] 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.259 "name": "Existed_Raid", 00:11:28.259 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:28.259 "strip_size_kb": 64, 00:11:28.259 "state": "configuring", 00:11:28.259 "raid_level": "concat", 00:11:28.259 "superblock": true, 00:11:28.259 "num_base_bdevs": 4, 00:11:28.259 "num_base_bdevs_discovered": 2, 00:11:28.259 
"num_base_bdevs_operational": 4, 00:11:28.259 "base_bdevs_list": [ 00:11:28.259 { 00:11:28.259 "name": "BaseBdev1", 00:11:28.259 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:28.259 "is_configured": true, 00:11:28.259 "data_offset": 2048, 00:11:28.259 "data_size": 63488 00:11:28.259 }, 00:11:28.259 { 00:11:28.259 "name": "BaseBdev2", 00:11:28.259 "uuid": "88c39432-4b66-447b-9449-a321c2c20f7c", 00:11:28.259 "is_configured": true, 00:11:28.259 "data_offset": 2048, 00:11:28.259 "data_size": 63488 00:11:28.259 }, 00:11:28.259 { 00:11:28.259 "name": "BaseBdev3", 00:11:28.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.259 "is_configured": false, 00:11:28.259 "data_offset": 0, 00:11:28.259 "data_size": 0 00:11:28.259 }, 00:11:28.259 { 00:11:28.259 "name": "BaseBdev4", 00:11:28.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.259 "is_configured": false, 00:11:28.259 "data_offset": 0, 00:11:28.259 "data_size": 0 00:11:28.259 } 00:11:28.259 ] 00:11:28.259 }' 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.259 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.827 18:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.827 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.827 18:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.827 [2024-11-26 18:58:20.025399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.827 BaseBdev3 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.827 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.828 [ 00:11:28.828 { 00:11:28.828 "name": "BaseBdev3", 00:11:28.828 "aliases": [ 00:11:28.828 "96f2c8a2-de1e-446a-afa4-7bab11c85623" 00:11:28.828 ], 00:11:28.828 "product_name": "Malloc disk", 00:11:28.828 "block_size": 512, 00:11:28.828 "num_blocks": 65536, 00:11:28.828 "uuid": "96f2c8a2-de1e-446a-afa4-7bab11c85623", 00:11:28.828 "assigned_rate_limits": { 00:11:28.828 "rw_ios_per_sec": 0, 00:11:28.828 "rw_mbytes_per_sec": 0, 00:11:28.828 "r_mbytes_per_sec": 0, 00:11:28.828 "w_mbytes_per_sec": 0 00:11:28.828 }, 00:11:28.828 "claimed": true, 00:11:28.828 "claim_type": "exclusive_write", 00:11:28.828 "zoned": false, 00:11:28.828 "supported_io_types": { 
00:11:28.828 "read": true, 00:11:28.828 "write": true, 00:11:28.828 "unmap": true, 00:11:28.828 "flush": true, 00:11:28.828 "reset": true, 00:11:28.828 "nvme_admin": false, 00:11:28.828 "nvme_io": false, 00:11:28.828 "nvme_io_md": false, 00:11:28.828 "write_zeroes": true, 00:11:28.828 "zcopy": true, 00:11:28.828 "get_zone_info": false, 00:11:28.828 "zone_management": false, 00:11:28.828 "zone_append": false, 00:11:28.828 "compare": false, 00:11:28.828 "compare_and_write": false, 00:11:28.828 "abort": true, 00:11:28.828 "seek_hole": false, 00:11:28.828 "seek_data": false, 00:11:28.828 "copy": true, 00:11:28.828 "nvme_iov_md": false 00:11:28.828 }, 00:11:28.828 "memory_domains": [ 00:11:28.828 { 00:11:28.828 "dma_device_id": "system", 00:11:28.828 "dma_device_type": 1 00:11:28.828 }, 00:11:28.828 { 00:11:28.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.828 "dma_device_type": 2 00:11:28.828 } 00:11:28.828 ], 00:11:28.828 "driver_specific": {} 00:11:28.828 } 00:11:28.828 ] 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.828 "name": "Existed_Raid", 00:11:28.828 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:28.828 "strip_size_kb": 64, 00:11:28.828 "state": "configuring", 00:11:28.828 "raid_level": "concat", 00:11:28.828 "superblock": true, 00:11:28.828 "num_base_bdevs": 4, 00:11:28.828 "num_base_bdevs_discovered": 3, 00:11:28.828 "num_base_bdevs_operational": 4, 00:11:28.828 "base_bdevs_list": [ 00:11:28.828 { 00:11:28.828 "name": "BaseBdev1", 00:11:28.828 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:28.828 "is_configured": true, 00:11:28.828 "data_offset": 2048, 00:11:28.828 "data_size": 63488 00:11:28.828 }, 00:11:28.828 { 00:11:28.828 "name": "BaseBdev2", 00:11:28.828 
"uuid": "88c39432-4b66-447b-9449-a321c2c20f7c", 00:11:28.828 "is_configured": true, 00:11:28.828 "data_offset": 2048, 00:11:28.828 "data_size": 63488 00:11:28.828 }, 00:11:28.828 { 00:11:28.828 "name": "BaseBdev3", 00:11:28.828 "uuid": "96f2c8a2-de1e-446a-afa4-7bab11c85623", 00:11:28.828 "is_configured": true, 00:11:28.828 "data_offset": 2048, 00:11:28.828 "data_size": 63488 00:11:28.828 }, 00:11:28.828 { 00:11:28.828 "name": "BaseBdev4", 00:11:28.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.828 "is_configured": false, 00:11:28.828 "data_offset": 0, 00:11:28.828 "data_size": 0 00:11:28.828 } 00:11:28.828 ] 00:11:28.828 }' 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.828 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.396 [2024-11-26 18:58:20.589611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.396 [2024-11-26 18:58:20.590027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.396 [2024-11-26 18:58:20.590057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.396 [2024-11-26 18:58:20.590482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:29.396 BaseBdev4 00:11:29.396 [2024-11-26 18:58:20.590756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.396 [2024-11-26 18:58:20.590795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:29.396 [2024-11-26 18:58:20.591064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.396 [ 00:11:29.396 { 00:11:29.396 "name": "BaseBdev4", 00:11:29.396 "aliases": [ 00:11:29.396 "3559b022-b09c-425d-bfeb-34c28e52e6b5" 00:11:29.396 ], 00:11:29.396 "product_name": "Malloc disk", 00:11:29.396 "block_size": 512, 00:11:29.396 
"num_blocks": 65536, 00:11:29.396 "uuid": "3559b022-b09c-425d-bfeb-34c28e52e6b5", 00:11:29.396 "assigned_rate_limits": { 00:11:29.396 "rw_ios_per_sec": 0, 00:11:29.396 "rw_mbytes_per_sec": 0, 00:11:29.396 "r_mbytes_per_sec": 0, 00:11:29.396 "w_mbytes_per_sec": 0 00:11:29.396 }, 00:11:29.396 "claimed": true, 00:11:29.396 "claim_type": "exclusive_write", 00:11:29.396 "zoned": false, 00:11:29.396 "supported_io_types": { 00:11:29.396 "read": true, 00:11:29.396 "write": true, 00:11:29.396 "unmap": true, 00:11:29.396 "flush": true, 00:11:29.396 "reset": true, 00:11:29.396 "nvme_admin": false, 00:11:29.396 "nvme_io": false, 00:11:29.396 "nvme_io_md": false, 00:11:29.396 "write_zeroes": true, 00:11:29.396 "zcopy": true, 00:11:29.396 "get_zone_info": false, 00:11:29.396 "zone_management": false, 00:11:29.396 "zone_append": false, 00:11:29.396 "compare": false, 00:11:29.396 "compare_and_write": false, 00:11:29.396 "abort": true, 00:11:29.396 "seek_hole": false, 00:11:29.396 "seek_data": false, 00:11:29.396 "copy": true, 00:11:29.396 "nvme_iov_md": false 00:11:29.396 }, 00:11:29.396 "memory_domains": [ 00:11:29.396 { 00:11:29.396 "dma_device_id": "system", 00:11:29.396 "dma_device_type": 1 00:11:29.396 }, 00:11:29.396 { 00:11:29.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.396 "dma_device_type": 2 00:11:29.396 } 00:11:29.396 ], 00:11:29.396 "driver_specific": {} 00:11:29.396 } 00:11:29.396 ] 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.396 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.396 "name": "Existed_Raid", 00:11:29.396 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:29.396 "strip_size_kb": 64, 00:11:29.396 "state": "online", 00:11:29.396 "raid_level": "concat", 00:11:29.396 "superblock": true, 00:11:29.397 "num_base_bdevs": 4, 
00:11:29.397 "num_base_bdevs_discovered": 4, 00:11:29.397 "num_base_bdevs_operational": 4, 00:11:29.397 "base_bdevs_list": [ 00:11:29.397 { 00:11:29.397 "name": "BaseBdev1", 00:11:29.397 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:29.397 "is_configured": true, 00:11:29.397 "data_offset": 2048, 00:11:29.397 "data_size": 63488 00:11:29.397 }, 00:11:29.397 { 00:11:29.397 "name": "BaseBdev2", 00:11:29.397 "uuid": "88c39432-4b66-447b-9449-a321c2c20f7c", 00:11:29.397 "is_configured": true, 00:11:29.397 "data_offset": 2048, 00:11:29.397 "data_size": 63488 00:11:29.397 }, 00:11:29.397 { 00:11:29.397 "name": "BaseBdev3", 00:11:29.397 "uuid": "96f2c8a2-de1e-446a-afa4-7bab11c85623", 00:11:29.397 "is_configured": true, 00:11:29.397 "data_offset": 2048, 00:11:29.397 "data_size": 63488 00:11:29.397 }, 00:11:29.397 { 00:11:29.397 "name": "BaseBdev4", 00:11:29.397 "uuid": "3559b022-b09c-425d-bfeb-34c28e52e6b5", 00:11:29.397 "is_configured": true, 00:11:29.397 "data_offset": 2048, 00:11:29.397 "data_size": 63488 00:11:29.397 } 00:11:29.397 ] 00:11:29.397 }' 00:11:29.397 18:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.397 18:58:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.965 
18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.965 [2024-11-26 18:58:21.150313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.965 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.965 "name": "Existed_Raid", 00:11:29.965 "aliases": [ 00:11:29.965 "2612d001-9fe9-4619-bc66-c0bc16b0abb4" 00:11:29.965 ], 00:11:29.965 "product_name": "Raid Volume", 00:11:29.965 "block_size": 512, 00:11:29.965 "num_blocks": 253952, 00:11:29.965 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:29.965 "assigned_rate_limits": { 00:11:29.965 "rw_ios_per_sec": 0, 00:11:29.965 "rw_mbytes_per_sec": 0, 00:11:29.965 "r_mbytes_per_sec": 0, 00:11:29.965 "w_mbytes_per_sec": 0 00:11:29.965 }, 00:11:29.965 "claimed": false, 00:11:29.965 "zoned": false, 00:11:29.965 "supported_io_types": { 00:11:29.965 "read": true, 00:11:29.965 "write": true, 00:11:29.965 "unmap": true, 00:11:29.965 "flush": true, 00:11:29.965 "reset": true, 00:11:29.965 "nvme_admin": false, 00:11:29.965 "nvme_io": false, 00:11:29.965 "nvme_io_md": false, 00:11:29.965 "write_zeroes": true, 00:11:29.965 "zcopy": false, 00:11:29.965 "get_zone_info": false, 00:11:29.965 "zone_management": false, 00:11:29.965 "zone_append": false, 00:11:29.965 "compare": false, 00:11:29.965 "compare_and_write": false, 00:11:29.965 "abort": false, 00:11:29.965 "seek_hole": false, 00:11:29.965 "seek_data": false, 00:11:29.965 "copy": false, 00:11:29.965 
"nvme_iov_md": false 00:11:29.965 }, 00:11:29.965 "memory_domains": [ 00:11:29.965 { 00:11:29.965 "dma_device_id": "system", 00:11:29.965 "dma_device_type": 1 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.965 "dma_device_type": 2 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "system", 00:11:29.965 "dma_device_type": 1 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.965 "dma_device_type": 2 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "system", 00:11:29.965 "dma_device_type": 1 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.965 "dma_device_type": 2 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "system", 00:11:29.965 "dma_device_type": 1 00:11:29.965 }, 00:11:29.965 { 00:11:29.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.965 "dma_device_type": 2 00:11:29.965 } 00:11:29.965 ], 00:11:29.965 "driver_specific": { 00:11:29.965 "raid": { 00:11:29.965 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:29.965 "strip_size_kb": 64, 00:11:29.965 "state": "online", 00:11:29.965 "raid_level": "concat", 00:11:29.966 "superblock": true, 00:11:29.966 "num_base_bdevs": 4, 00:11:29.966 "num_base_bdevs_discovered": 4, 00:11:29.966 "num_base_bdevs_operational": 4, 00:11:29.966 "base_bdevs_list": [ 00:11:29.966 { 00:11:29.966 "name": "BaseBdev1", 00:11:29.966 "uuid": "ada7b5b0-952a-486b-9f71-65d75f1464e8", 00:11:29.966 "is_configured": true, 00:11:29.966 "data_offset": 2048, 00:11:29.966 "data_size": 63488 00:11:29.966 }, 00:11:29.966 { 00:11:29.966 "name": "BaseBdev2", 00:11:29.966 "uuid": "88c39432-4b66-447b-9449-a321c2c20f7c", 00:11:29.966 "is_configured": true, 00:11:29.966 "data_offset": 2048, 00:11:29.966 "data_size": 63488 00:11:29.966 }, 00:11:29.966 { 00:11:29.966 "name": "BaseBdev3", 00:11:29.966 "uuid": "96f2c8a2-de1e-446a-afa4-7bab11c85623", 00:11:29.966 "is_configured": true, 
00:11:29.966 "data_offset": 2048, 00:11:29.966 "data_size": 63488 00:11:29.966 }, 00:11:29.966 { 00:11:29.966 "name": "BaseBdev4", 00:11:29.966 "uuid": "3559b022-b09c-425d-bfeb-34c28e52e6b5", 00:11:29.966 "is_configured": true, 00:11:29.966 "data_offset": 2048, 00:11:29.966 "data_size": 63488 00:11:29.966 } 00:11:29.966 ] 00:11:29.966 } 00:11:29.966 } 00:11:29.966 }' 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.966 BaseBdev2 00:11:29.966 BaseBdev3 00:11:29.966 BaseBdev4' 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.966 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.225 18:58:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.225 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.226 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.226 [2024-11-26 18:58:21.534095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.226 [2024-11-26 18:58:21.534134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.226 [2024-11-26 18:58:21.534206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.485 "name": "Existed_Raid", 00:11:30.485 "uuid": "2612d001-9fe9-4619-bc66-c0bc16b0abb4", 00:11:30.485 "strip_size_kb": 64, 00:11:30.485 "state": "offline", 00:11:30.485 "raid_level": "concat", 00:11:30.485 "superblock": true, 00:11:30.485 "num_base_bdevs": 4, 00:11:30.485 "num_base_bdevs_discovered": 3, 00:11:30.485 "num_base_bdevs_operational": 3, 00:11:30.485 "base_bdevs_list": [ 00:11:30.485 { 00:11:30.485 "name": null, 00:11:30.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.485 "is_configured": false, 00:11:30.485 "data_offset": 0, 00:11:30.485 "data_size": 63488 00:11:30.485 }, 00:11:30.485 { 00:11:30.485 "name": "BaseBdev2", 00:11:30.485 "uuid": "88c39432-4b66-447b-9449-a321c2c20f7c", 00:11:30.485 "is_configured": true, 00:11:30.485 "data_offset": 2048, 00:11:30.485 "data_size": 63488 00:11:30.485 }, 00:11:30.485 { 00:11:30.485 "name": "BaseBdev3", 00:11:30.485 "uuid": "96f2c8a2-de1e-446a-afa4-7bab11c85623", 00:11:30.485 "is_configured": true, 00:11:30.485 "data_offset": 2048, 00:11:30.485 "data_size": 63488 00:11:30.485 }, 00:11:30.485 { 00:11:30.485 "name": "BaseBdev4", 00:11:30.485 "uuid": "3559b022-b09c-425d-bfeb-34c28e52e6b5", 00:11:30.485 "is_configured": true, 00:11:30.485 "data_offset": 2048, 00:11:30.485 "data_size": 63488 00:11:30.485 } 00:11:30.485 ] 00:11:30.485 }' 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.485 18:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.054 18:58:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.054 [2024-11-26 18:58:22.185181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.054 [2024-11-26 18:58:22.327783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.054 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:31.313 18:58:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 [2024-11-26 18:58:22.476554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:31.313 [2024-11-26 18:58:22.476762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 BaseBdev2 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 [ 00:11:31.574 { 00:11:31.574 "name": "BaseBdev2", 00:11:31.574 "aliases": [ 00:11:31.574 
"cd323ded-b98b-4349-94a7-3b4a756ebb00" 00:11:31.574 ], 00:11:31.574 "product_name": "Malloc disk", 00:11:31.574 "block_size": 512, 00:11:31.574 "num_blocks": 65536, 00:11:31.574 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:31.574 "assigned_rate_limits": { 00:11:31.574 "rw_ios_per_sec": 0, 00:11:31.574 "rw_mbytes_per_sec": 0, 00:11:31.574 "r_mbytes_per_sec": 0, 00:11:31.574 "w_mbytes_per_sec": 0 00:11:31.574 }, 00:11:31.574 "claimed": false, 00:11:31.574 "zoned": false, 00:11:31.574 "supported_io_types": { 00:11:31.574 "read": true, 00:11:31.574 "write": true, 00:11:31.574 "unmap": true, 00:11:31.574 "flush": true, 00:11:31.574 "reset": true, 00:11:31.574 "nvme_admin": false, 00:11:31.574 "nvme_io": false, 00:11:31.574 "nvme_io_md": false, 00:11:31.574 "write_zeroes": true, 00:11:31.574 "zcopy": true, 00:11:31.574 "get_zone_info": false, 00:11:31.574 "zone_management": false, 00:11:31.574 "zone_append": false, 00:11:31.574 "compare": false, 00:11:31.574 "compare_and_write": false, 00:11:31.574 "abort": true, 00:11:31.574 "seek_hole": false, 00:11:31.574 "seek_data": false, 00:11:31.574 "copy": true, 00:11:31.574 "nvme_iov_md": false 00:11:31.574 }, 00:11:31.574 "memory_domains": [ 00:11:31.574 { 00:11:31.574 "dma_device_id": "system", 00:11:31.574 "dma_device_type": 1 00:11:31.574 }, 00:11:31.574 { 00:11:31.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.574 "dma_device_type": 2 00:11:31.574 } 00:11:31.574 ], 00:11:31.574 "driver_specific": {} 00:11:31.574 } 00:11:31.574 ] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.574 18:58:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 BaseBdev3 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 [ 00:11:31.574 { 
00:11:31.574 "name": "BaseBdev3", 00:11:31.574 "aliases": [ 00:11:31.574 "f85316f4-acd8-462c-a144-c5d167237877" 00:11:31.574 ], 00:11:31.574 "product_name": "Malloc disk", 00:11:31.574 "block_size": 512, 00:11:31.574 "num_blocks": 65536, 00:11:31.574 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:31.574 "assigned_rate_limits": { 00:11:31.574 "rw_ios_per_sec": 0, 00:11:31.574 "rw_mbytes_per_sec": 0, 00:11:31.574 "r_mbytes_per_sec": 0, 00:11:31.574 "w_mbytes_per_sec": 0 00:11:31.574 }, 00:11:31.574 "claimed": false, 00:11:31.574 "zoned": false, 00:11:31.574 "supported_io_types": { 00:11:31.574 "read": true, 00:11:31.574 "write": true, 00:11:31.574 "unmap": true, 00:11:31.574 "flush": true, 00:11:31.574 "reset": true, 00:11:31.574 "nvme_admin": false, 00:11:31.574 "nvme_io": false, 00:11:31.574 "nvme_io_md": false, 00:11:31.574 "write_zeroes": true, 00:11:31.574 "zcopy": true, 00:11:31.574 "get_zone_info": false, 00:11:31.574 "zone_management": false, 00:11:31.574 "zone_append": false, 00:11:31.574 "compare": false, 00:11:31.574 "compare_and_write": false, 00:11:31.574 "abort": true, 00:11:31.574 "seek_hole": false, 00:11:31.574 "seek_data": false, 00:11:31.574 "copy": true, 00:11:31.574 "nvme_iov_md": false 00:11:31.574 }, 00:11:31.574 "memory_domains": [ 00:11:31.574 { 00:11:31.574 "dma_device_id": "system", 00:11:31.574 "dma_device_type": 1 00:11:31.574 }, 00:11:31.574 { 00:11:31.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.574 "dma_device_type": 2 00:11:31.574 } 00:11:31.574 ], 00:11:31.574 "driver_specific": {} 00:11:31.574 } 00:11:31.574 ] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 BaseBdev4 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.574 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:31.574 [ 00:11:31.574 { 00:11:31.574 "name": "BaseBdev4", 00:11:31.574 "aliases": [ 00:11:31.574 "9adf5127-3199-4c47-a84f-2de8358ee789" 00:11:31.574 ], 00:11:31.574 "product_name": "Malloc disk", 00:11:31.574 "block_size": 512, 00:11:31.574 "num_blocks": 65536, 00:11:31.574 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:31.574 "assigned_rate_limits": { 00:11:31.574 "rw_ios_per_sec": 0, 00:11:31.574 "rw_mbytes_per_sec": 0, 00:11:31.574 "r_mbytes_per_sec": 0, 00:11:31.574 "w_mbytes_per_sec": 0 00:11:31.574 }, 00:11:31.574 "claimed": false, 00:11:31.574 "zoned": false, 00:11:31.574 "supported_io_types": { 00:11:31.574 "read": true, 00:11:31.574 "write": true, 00:11:31.574 "unmap": true, 00:11:31.574 "flush": true, 00:11:31.574 "reset": true, 00:11:31.574 "nvme_admin": false, 00:11:31.574 "nvme_io": false, 00:11:31.574 "nvme_io_md": false, 00:11:31.574 "write_zeroes": true, 00:11:31.574 "zcopy": true, 00:11:31.574 "get_zone_info": false, 00:11:31.574 "zone_management": false, 00:11:31.574 "zone_append": false, 00:11:31.574 "compare": false, 00:11:31.574 "compare_and_write": false, 00:11:31.574 "abort": true, 00:11:31.574 "seek_hole": false, 00:11:31.574 "seek_data": false, 00:11:31.574 "copy": true, 00:11:31.575 "nvme_iov_md": false 00:11:31.575 }, 00:11:31.575 "memory_domains": [ 00:11:31.575 { 00:11:31.575 "dma_device_id": "system", 00:11:31.575 "dma_device_type": 1 00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.575 "dma_device_type": 2 00:11:31.575 } 00:11:31.575 ], 00:11:31.575 "driver_specific": {} 00:11:31.575 } 00:11:31.575 ] 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.575 18:58:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.575 [2024-11-26 18:58:22.856803] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.575 [2024-11-26 18:58:22.856867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.575 [2024-11-26 18:58:22.856958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.575 [2024-11-26 18:58:22.859555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.575 [2024-11-26 18:58:22.859822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.575 "name": "Existed_Raid", 00:11:31.575 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:31.575 "strip_size_kb": 64, 00:11:31.575 "state": "configuring", 00:11:31.575 "raid_level": "concat", 00:11:31.575 "superblock": true, 00:11:31.575 "num_base_bdevs": 4, 00:11:31.575 "num_base_bdevs_discovered": 3, 00:11:31.575 "num_base_bdevs_operational": 4, 00:11:31.575 "base_bdevs_list": [ 00:11:31.575 { 00:11:31.575 "name": "BaseBdev1", 00:11:31.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.575 "is_configured": false, 00:11:31.575 "data_offset": 0, 00:11:31.575 "data_size": 0 00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "name": "BaseBdev2", 00:11:31.575 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:31.575 "is_configured": true, 00:11:31.575 "data_offset": 2048, 00:11:31.575 "data_size": 63488 
00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "name": "BaseBdev3", 00:11:31.575 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:31.575 "is_configured": true, 00:11:31.575 "data_offset": 2048, 00:11:31.575 "data_size": 63488 00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "name": "BaseBdev4", 00:11:31.575 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:31.575 "is_configured": true, 00:11:31.575 "data_offset": 2048, 00:11:31.575 "data_size": 63488 00:11:31.575 } 00:11:31.575 ] 00:11:31.575 }' 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.575 18:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.144 [2024-11-26 18:58:23.392987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.144 "name": "Existed_Raid", 00:11:32.144 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:32.144 "strip_size_kb": 64, 00:11:32.144 "state": "configuring", 00:11:32.144 "raid_level": "concat", 00:11:32.144 "superblock": true, 00:11:32.144 "num_base_bdevs": 4, 00:11:32.144 "num_base_bdevs_discovered": 2, 00:11:32.144 "num_base_bdevs_operational": 4, 00:11:32.144 "base_bdevs_list": [ 00:11:32.144 { 00:11:32.144 "name": "BaseBdev1", 00:11:32.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.144 "is_configured": false, 00:11:32.144 "data_offset": 0, 00:11:32.144 "data_size": 0 00:11:32.144 }, 00:11:32.144 { 00:11:32.144 "name": null, 00:11:32.144 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:32.144 "is_configured": false, 00:11:32.144 "data_offset": 0, 00:11:32.144 "data_size": 63488 
00:11:32.144 }, 00:11:32.144 { 00:11:32.144 "name": "BaseBdev3", 00:11:32.144 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:32.144 "is_configured": true, 00:11:32.144 "data_offset": 2048, 00:11:32.144 "data_size": 63488 00:11:32.144 }, 00:11:32.144 { 00:11:32.144 "name": "BaseBdev4", 00:11:32.144 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:32.144 "is_configured": true, 00:11:32.144 "data_offset": 2048, 00:11:32.144 "data_size": 63488 00:11:32.144 } 00:11:32.144 ] 00:11:32.144 }' 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.144 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.712 18:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.712 [2024-11-26 18:58:24.024483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.712 BaseBdev1 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.712 [ 00:11:32.712 { 00:11:32.712 "name": "BaseBdev1", 00:11:32.712 "aliases": [ 00:11:32.712 "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5" 00:11:32.712 ], 00:11:32.712 "product_name": "Malloc disk", 00:11:32.712 "block_size": 512, 00:11:32.712 "num_blocks": 65536, 00:11:32.712 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:32.712 "assigned_rate_limits": { 00:11:32.712 "rw_ios_per_sec": 0, 00:11:32.712 "rw_mbytes_per_sec": 0, 
00:11:32.712 "r_mbytes_per_sec": 0, 00:11:32.712 "w_mbytes_per_sec": 0 00:11:32.712 }, 00:11:32.712 "claimed": true, 00:11:32.712 "claim_type": "exclusive_write", 00:11:32.712 "zoned": false, 00:11:32.712 "supported_io_types": { 00:11:32.712 "read": true, 00:11:32.712 "write": true, 00:11:32.712 "unmap": true, 00:11:32.712 "flush": true, 00:11:32.712 "reset": true, 00:11:32.712 "nvme_admin": false, 00:11:32.712 "nvme_io": false, 00:11:32.712 "nvme_io_md": false, 00:11:32.712 "write_zeroes": true, 00:11:32.712 "zcopy": true, 00:11:32.712 "get_zone_info": false, 00:11:32.712 "zone_management": false, 00:11:32.712 "zone_append": false, 00:11:32.712 "compare": false, 00:11:32.712 "compare_and_write": false, 00:11:32.712 "abort": true, 00:11:32.712 "seek_hole": false, 00:11:32.712 "seek_data": false, 00:11:32.712 "copy": true, 00:11:32.712 "nvme_iov_md": false 00:11:32.712 }, 00:11:32.712 "memory_domains": [ 00:11:32.712 { 00:11:32.712 "dma_device_id": "system", 00:11:32.712 "dma_device_type": 1 00:11:32.712 }, 00:11:32.712 { 00:11:32.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.712 "dma_device_type": 2 00:11:32.712 } 00:11:32.712 ], 00:11:32.712 "driver_specific": {} 00:11:32.712 } 00:11:32.712 ] 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.712 18:58:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.712 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.971 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.971 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.971 "name": "Existed_Raid", 00:11:32.971 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:32.971 "strip_size_kb": 64, 00:11:32.971 "state": "configuring", 00:11:32.971 "raid_level": "concat", 00:11:32.971 "superblock": true, 00:11:32.971 "num_base_bdevs": 4, 00:11:32.971 "num_base_bdevs_discovered": 3, 00:11:32.971 "num_base_bdevs_operational": 4, 00:11:32.971 "base_bdevs_list": [ 00:11:32.971 { 00:11:32.971 "name": "BaseBdev1", 00:11:32.971 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:32.971 "is_configured": true, 00:11:32.971 "data_offset": 2048, 00:11:32.971 "data_size": 63488 00:11:32.971 }, 00:11:32.971 { 
00:11:32.971 "name": null, 00:11:32.971 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:32.971 "is_configured": false, 00:11:32.971 "data_offset": 0, 00:11:32.971 "data_size": 63488 00:11:32.971 }, 00:11:32.971 { 00:11:32.971 "name": "BaseBdev3", 00:11:32.971 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:32.971 "is_configured": true, 00:11:32.971 "data_offset": 2048, 00:11:32.971 "data_size": 63488 00:11:32.971 }, 00:11:32.971 { 00:11:32.971 "name": "BaseBdev4", 00:11:32.971 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:32.971 "is_configured": true, 00:11:32.971 "data_offset": 2048, 00:11:32.971 "data_size": 63488 00:11:32.971 } 00:11:32.971 ] 00:11:32.971 }' 00:11:32.971 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.971 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.229 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.229 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.229 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.229 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.229 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.489 [2024-11-26 18:58:24.616894] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.489 18:58:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.489 "name": "Existed_Raid", 00:11:33.489 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:33.489 "strip_size_kb": 64, 00:11:33.489 "state": "configuring", 00:11:33.489 "raid_level": "concat", 00:11:33.489 "superblock": true, 00:11:33.489 "num_base_bdevs": 4, 00:11:33.489 "num_base_bdevs_discovered": 2, 00:11:33.489 "num_base_bdevs_operational": 4, 00:11:33.489 "base_bdevs_list": [ 00:11:33.489 { 00:11:33.489 "name": "BaseBdev1", 00:11:33.489 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:33.489 "is_configured": true, 00:11:33.489 "data_offset": 2048, 00:11:33.489 "data_size": 63488 00:11:33.489 }, 00:11:33.489 { 00:11:33.489 "name": null, 00:11:33.489 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:33.489 "is_configured": false, 00:11:33.489 "data_offset": 0, 00:11:33.489 "data_size": 63488 00:11:33.489 }, 00:11:33.489 { 00:11:33.489 "name": null, 00:11:33.489 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:33.489 "is_configured": false, 00:11:33.489 "data_offset": 0, 00:11:33.489 "data_size": 63488 00:11:33.489 }, 00:11:33.489 { 00:11:33.489 "name": "BaseBdev4", 00:11:33.489 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:33.489 "is_configured": true, 00:11:33.489 "data_offset": 2048, 00:11:33.489 "data_size": 63488 00:11:33.489 } 00:11:33.489 ] 00:11:33.489 }' 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.489 18:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.139 
18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.139 [2024-11-26 18:58:25.205023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.139 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.139 "name": "Existed_Raid", 00:11:34.139 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:34.139 "strip_size_kb": 64, 00:11:34.139 "state": "configuring", 00:11:34.139 "raid_level": "concat", 00:11:34.139 "superblock": true, 00:11:34.139 "num_base_bdevs": 4, 00:11:34.139 "num_base_bdevs_discovered": 3, 00:11:34.139 "num_base_bdevs_operational": 4, 00:11:34.139 "base_bdevs_list": [ 00:11:34.139 { 00:11:34.139 "name": "BaseBdev1", 00:11:34.140 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:34.140 "is_configured": true, 00:11:34.140 "data_offset": 2048, 00:11:34.140 "data_size": 63488 00:11:34.140 }, 00:11:34.140 { 00:11:34.140 "name": null, 00:11:34.140 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:34.140 "is_configured": false, 00:11:34.140 "data_offset": 0, 00:11:34.140 "data_size": 63488 00:11:34.140 }, 00:11:34.140 { 00:11:34.140 "name": "BaseBdev3", 00:11:34.140 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:34.140 "is_configured": true, 00:11:34.140 "data_offset": 2048, 00:11:34.140 "data_size": 63488 00:11:34.140 }, 00:11:34.140 { 00:11:34.140 "name": "BaseBdev4", 00:11:34.140 "uuid": 
"9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:34.140 "is_configured": true, 00:11:34.140 "data_offset": 2048, 00:11:34.140 "data_size": 63488 00:11:34.140 } 00:11:34.140 ] 00:11:34.140 }' 00:11:34.140 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.140 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.406 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.406 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.406 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.406 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.406 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.665 [2024-11-26 18:58:25.793270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.665 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.666 "name": "Existed_Raid", 00:11:34.666 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:34.666 "strip_size_kb": 64, 00:11:34.666 "state": "configuring", 00:11:34.666 "raid_level": "concat", 00:11:34.666 "superblock": true, 00:11:34.666 "num_base_bdevs": 4, 00:11:34.666 "num_base_bdevs_discovered": 2, 00:11:34.666 "num_base_bdevs_operational": 4, 00:11:34.666 "base_bdevs_list": [ 00:11:34.666 { 00:11:34.666 "name": null, 00:11:34.666 
"uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:34.666 "is_configured": false, 00:11:34.666 "data_offset": 0, 00:11:34.666 "data_size": 63488 00:11:34.666 }, 00:11:34.666 { 00:11:34.666 "name": null, 00:11:34.666 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:34.666 "is_configured": false, 00:11:34.666 "data_offset": 0, 00:11:34.666 "data_size": 63488 00:11:34.666 }, 00:11:34.666 { 00:11:34.666 "name": "BaseBdev3", 00:11:34.666 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:34.666 "is_configured": true, 00:11:34.666 "data_offset": 2048, 00:11:34.666 "data_size": 63488 00:11:34.666 }, 00:11:34.666 { 00:11:34.666 "name": "BaseBdev4", 00:11:34.666 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:34.666 "is_configured": true, 00:11:34.666 "data_offset": 2048, 00:11:34.666 "data_size": 63488 00:11:34.666 } 00:11:34.666 ] 00:11:34.666 }' 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.666 18:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.236 [2024-11-26 18:58:26.458933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.236 18:58:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.236 "name": "Existed_Raid", 00:11:35.236 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:35.236 "strip_size_kb": 64, 00:11:35.236 "state": "configuring", 00:11:35.236 "raid_level": "concat", 00:11:35.236 "superblock": true, 00:11:35.236 "num_base_bdevs": 4, 00:11:35.236 "num_base_bdevs_discovered": 3, 00:11:35.236 "num_base_bdevs_operational": 4, 00:11:35.236 "base_bdevs_list": [ 00:11:35.236 { 00:11:35.236 "name": null, 00:11:35.236 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:35.236 "is_configured": false, 00:11:35.236 "data_offset": 0, 00:11:35.236 "data_size": 63488 00:11:35.236 }, 00:11:35.236 { 00:11:35.236 "name": "BaseBdev2", 00:11:35.236 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:35.236 "is_configured": true, 00:11:35.236 "data_offset": 2048, 00:11:35.236 "data_size": 63488 00:11:35.236 }, 00:11:35.236 { 00:11:35.236 "name": "BaseBdev3", 00:11:35.236 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:35.236 "is_configured": true, 00:11:35.236 "data_offset": 2048, 00:11:35.236 "data_size": 63488 00:11:35.236 }, 00:11:35.236 { 00:11:35.236 "name": "BaseBdev4", 00:11:35.236 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:35.236 "is_configured": true, 00:11:35.236 "data_offset": 2048, 00:11:35.236 "data_size": 63488 00:11:35.236 } 00:11:35.236 ] 00:11:35.236 }' 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.236 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.804 18:58:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.804 18:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:35.804 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 18:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.804 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.805 [2024-11-26 18:58:27.126821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.805 [2024-11-26 18:58:27.127129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:35.805 [2024-11-26 18:58:27.127148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.805 NewBaseBdev 00:11:35.805 [2024-11-26 18:58:27.127499] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:35.805 [2024-11-26 18:58:27.127670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:35.805 [2024-11-26 18:58:27.127691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:35.805 [2024-11-26 18:58:27.127848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.805 
18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.805 [ 00:11:35.805 { 00:11:35.805 "name": "NewBaseBdev", 00:11:35.805 "aliases": [ 00:11:35.805 "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5" 00:11:35.805 ], 00:11:35.805 "product_name": "Malloc disk", 00:11:35.805 "block_size": 512, 00:11:35.805 "num_blocks": 65536, 00:11:35.805 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:35.805 "assigned_rate_limits": { 00:11:35.805 "rw_ios_per_sec": 0, 00:11:35.805 "rw_mbytes_per_sec": 0, 00:11:35.805 "r_mbytes_per_sec": 0, 00:11:35.805 "w_mbytes_per_sec": 0 00:11:35.805 }, 00:11:35.805 "claimed": true, 00:11:35.805 "claim_type": "exclusive_write", 00:11:35.805 "zoned": false, 00:11:35.805 "supported_io_types": { 00:11:35.805 "read": true, 00:11:35.805 "write": true, 00:11:35.805 "unmap": true, 00:11:35.805 "flush": true, 00:11:35.805 "reset": true, 00:11:35.805 "nvme_admin": false, 00:11:35.805 "nvme_io": false, 00:11:35.805 "nvme_io_md": false, 00:11:35.805 "write_zeroes": true, 00:11:35.805 "zcopy": true, 00:11:35.805 "get_zone_info": false, 00:11:35.805 "zone_management": false, 00:11:35.805 "zone_append": false, 00:11:35.805 "compare": false, 00:11:35.805 "compare_and_write": false, 00:11:35.805 "abort": true, 00:11:35.805 "seek_hole": false, 00:11:35.805 "seek_data": false, 00:11:35.805 "copy": true, 00:11:35.805 "nvme_iov_md": false 00:11:35.805 }, 00:11:35.805 "memory_domains": [ 00:11:35.805 { 00:11:35.805 "dma_device_id": "system", 00:11:35.805 "dma_device_type": 1 00:11:35.805 }, 00:11:35.805 { 00:11:35.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.805 "dma_device_type": 2 00:11:35.805 } 00:11:35.805 ], 00:11:35.805 "driver_specific": {} 00:11:35.805 } 00:11:35.805 ] 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.805 18:58:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.805 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.065 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.065 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.065 "name": "Existed_Raid", 00:11:36.065 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:36.065 "strip_size_kb": 64, 00:11:36.065 
"state": "online", 00:11:36.065 "raid_level": "concat", 00:11:36.065 "superblock": true, 00:11:36.065 "num_base_bdevs": 4, 00:11:36.065 "num_base_bdevs_discovered": 4, 00:11:36.065 "num_base_bdevs_operational": 4, 00:11:36.065 "base_bdevs_list": [ 00:11:36.065 { 00:11:36.065 "name": "NewBaseBdev", 00:11:36.065 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:36.065 "is_configured": true, 00:11:36.065 "data_offset": 2048, 00:11:36.065 "data_size": 63488 00:11:36.065 }, 00:11:36.065 { 00:11:36.065 "name": "BaseBdev2", 00:11:36.065 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:36.065 "is_configured": true, 00:11:36.065 "data_offset": 2048, 00:11:36.065 "data_size": 63488 00:11:36.065 }, 00:11:36.065 { 00:11:36.065 "name": "BaseBdev3", 00:11:36.065 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:36.065 "is_configured": true, 00:11:36.065 "data_offset": 2048, 00:11:36.065 "data_size": 63488 00:11:36.065 }, 00:11:36.065 { 00:11:36.065 "name": "BaseBdev4", 00:11:36.065 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:36.065 "is_configured": true, 00:11:36.065 "data_offset": 2048, 00:11:36.065 "data_size": 63488 00:11:36.065 } 00:11:36.065 ] 00:11:36.065 }' 00:11:36.065 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.065 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.325 
18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 [2024-11-26 18:58:27.671526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.584 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.584 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.584 "name": "Existed_Raid", 00:11:36.584 "aliases": [ 00:11:36.584 "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7" 00:11:36.584 ], 00:11:36.584 "product_name": "Raid Volume", 00:11:36.584 "block_size": 512, 00:11:36.584 "num_blocks": 253952, 00:11:36.584 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:36.584 "assigned_rate_limits": { 00:11:36.584 "rw_ios_per_sec": 0, 00:11:36.584 "rw_mbytes_per_sec": 0, 00:11:36.584 "r_mbytes_per_sec": 0, 00:11:36.584 "w_mbytes_per_sec": 0 00:11:36.584 }, 00:11:36.584 "claimed": false, 00:11:36.584 "zoned": false, 00:11:36.584 "supported_io_types": { 00:11:36.584 "read": true, 00:11:36.584 "write": true, 00:11:36.584 "unmap": true, 00:11:36.584 "flush": true, 00:11:36.584 "reset": true, 00:11:36.584 "nvme_admin": false, 00:11:36.584 "nvme_io": false, 00:11:36.584 "nvme_io_md": false, 00:11:36.584 "write_zeroes": true, 00:11:36.584 "zcopy": false, 00:11:36.584 "get_zone_info": false, 00:11:36.584 "zone_management": false, 00:11:36.584 "zone_append": false, 00:11:36.584 "compare": false, 00:11:36.584 "compare_and_write": false, 00:11:36.584 "abort": 
false, 00:11:36.584 "seek_hole": false, 00:11:36.584 "seek_data": false, 00:11:36.584 "copy": false, 00:11:36.584 "nvme_iov_md": false 00:11:36.584 }, 00:11:36.584 "memory_domains": [ 00:11:36.584 { 00:11:36.584 "dma_device_id": "system", 00:11:36.584 "dma_device_type": 1 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.584 "dma_device_type": 2 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "system", 00:11:36.584 "dma_device_type": 1 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.584 "dma_device_type": 2 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "system", 00:11:36.584 "dma_device_type": 1 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.584 "dma_device_type": 2 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "system", 00:11:36.584 "dma_device_type": 1 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.584 "dma_device_type": 2 00:11:36.584 } 00:11:36.584 ], 00:11:36.584 "driver_specific": { 00:11:36.584 "raid": { 00:11:36.584 "uuid": "985b15eb-29ca-4ac1-9bcc-22aaa5cd10c7", 00:11:36.584 "strip_size_kb": 64, 00:11:36.584 "state": "online", 00:11:36.584 "raid_level": "concat", 00:11:36.584 "superblock": true, 00:11:36.584 "num_base_bdevs": 4, 00:11:36.584 "num_base_bdevs_discovered": 4, 00:11:36.584 "num_base_bdevs_operational": 4, 00:11:36.584 "base_bdevs_list": [ 00:11:36.584 { 00:11:36.584 "name": "NewBaseBdev", 00:11:36.584 "uuid": "fb2ff33f-61d3-43f8-8b65-6790dbf4c7f5", 00:11:36.584 "is_configured": true, 00:11:36.584 "data_offset": 2048, 00:11:36.584 "data_size": 63488 00:11:36.584 }, 00:11:36.584 { 00:11:36.584 "name": "BaseBdev2", 00:11:36.585 "uuid": "cd323ded-b98b-4349-94a7-3b4a756ebb00", 00:11:36.585 "is_configured": true, 00:11:36.585 "data_offset": 2048, 00:11:36.585 "data_size": 63488 00:11:36.585 }, 00:11:36.585 { 00:11:36.585 
"name": "BaseBdev3", 00:11:36.585 "uuid": "f85316f4-acd8-462c-a144-c5d167237877", 00:11:36.585 "is_configured": true, 00:11:36.585 "data_offset": 2048, 00:11:36.585 "data_size": 63488 00:11:36.585 }, 00:11:36.585 { 00:11:36.585 "name": "BaseBdev4", 00:11:36.585 "uuid": "9adf5127-3199-4c47-a84f-2de8358ee789", 00:11:36.585 "is_configured": true, 00:11:36.585 "data_offset": 2048, 00:11:36.585 "data_size": 63488 00:11:36.585 } 00:11:36.585 ] 00:11:36.585 } 00:11:36.585 } 00:11:36.585 }' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:36.585 BaseBdev2 00:11:36.585 BaseBdev3 00:11:36.585 BaseBdev4' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.585 18:58:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.585 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 18:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 [2024-11-26 18:58:28.051153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.844 [2024-11-26 18:58:28.051206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.844 [2024-11-26 18:58:28.051367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.844 [2024-11-26 18:58:28.051465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.844 [2024-11-26 18:58:28.051482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72123 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72123 ']' 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72123 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72123 00:11:36.844 killing process with pid 72123 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72123' 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72123 00:11:36.844 [2024-11-26 18:58:28.088238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.844 18:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72123 00:11:37.103 [2024-11-26 18:58:28.452592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.480 18:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.480 ************************************ 00:11:38.480 END TEST raid_state_function_test_sb 00:11:38.480 ************************************ 00:11:38.480 00:11:38.480 real 0m12.974s 00:11:38.480 user 0m21.481s 00:11:38.480 sys 
0m1.890s 00:11:38.480 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.480 18:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 18:58:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:38.480 18:58:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.480 18:58:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.480 18:58:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 ************************************ 00:11:38.480 START TEST raid_superblock_test 00:11:38.480 ************************************ 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72809 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:38.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72809 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72809 ']' 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.480 18:58:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.480 [2024-11-26 18:58:29.668099] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:38.480 [2024-11-26 18:58:29.668553] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72809 ] 00:11:38.739 [2024-11-26 18:58:29.854075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.739 [2024-11-26 18:58:29.986098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.997 [2024-11-26 18:58:30.199966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.997 [2024-11-26 18:58:30.200287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:39.565 
18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 malloc1 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 [2024-11-26 18:58:30.734025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.565 [2024-11-26 18:58:30.734322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.565 [2024-11-26 18:58:30.734478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.565 [2024-11-26 18:58:30.734600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.565 [2024-11-26 18:58:30.737592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.565 pt1 00:11:39.565 [2024-11-26 18:58:30.737756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 malloc2 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 [2024-11-26 18:58:30.791852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.565 [2024-11-26 18:58:30.792060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.565 [2024-11-26 18:58:30.792140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.565 [2024-11-26 18:58:30.792314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.565 [2024-11-26 18:58:30.795367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.565 [2024-11-26 18:58:30.795522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.565 
pt2 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 malloc3 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.565 [2024-11-26 18:58:30.863879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.565 [2024-11-26 18:58:30.863960] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.565 [2024-11-26 18:58:30.863995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:39.565 [2024-11-26 18:58:30.864010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.565 [2024-11-26 18:58:30.866836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.565 [2024-11-26 18:58:30.867021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.565 pt3 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:39.565 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.566 malloc4 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.566 [2024-11-26 18:58:30.922090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:39.566 [2024-11-26 18:58:30.922334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.566 [2024-11-26 18:58:30.922411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:39.566 [2024-11-26 18:58:30.922654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.566 [2024-11-26 18:58:30.925646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.566 [2024-11-26 18:58:30.925807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:39.566 pt4 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.566 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.825 [2024-11-26 18:58:30.934190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.825 [2024-11-26 
18:58:30.936742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.825 [2024-11-26 18:58:30.937022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.825 [2024-11-26 18:58:30.937106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:39.825 [2024-11-26 18:58:30.937397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:39.825 [2024-11-26 18:58:30.937445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.825 [2024-11-26 18:58:30.937802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:39.825 [2024-11-26 18:58:30.938047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:39.825 [2024-11-26 18:58:30.938070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:39.825 [2024-11-26 18:58:30.938364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.825 "name": "raid_bdev1", 00:11:39.825 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:39.825 "strip_size_kb": 64, 00:11:39.825 "state": "online", 00:11:39.825 "raid_level": "concat", 00:11:39.825 "superblock": true, 00:11:39.825 "num_base_bdevs": 4, 00:11:39.825 "num_base_bdevs_discovered": 4, 00:11:39.825 "num_base_bdevs_operational": 4, 00:11:39.825 "base_bdevs_list": [ 00:11:39.825 { 00:11:39.825 "name": "pt1", 00:11:39.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.825 "is_configured": true, 00:11:39.825 "data_offset": 2048, 00:11:39.825 "data_size": 63488 00:11:39.825 }, 00:11:39.825 { 00:11:39.825 "name": "pt2", 00:11:39.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.825 "is_configured": true, 00:11:39.825 "data_offset": 2048, 00:11:39.825 "data_size": 63488 00:11:39.825 }, 00:11:39.825 { 00:11:39.825 "name": "pt3", 00:11:39.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.825 "is_configured": true, 00:11:39.825 "data_offset": 2048, 00:11:39.825 
"data_size": 63488 00:11:39.825 }, 00:11:39.825 { 00:11:39.825 "name": "pt4", 00:11:39.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.825 "is_configured": true, 00:11:39.825 "data_offset": 2048, 00:11:39.825 "data_size": 63488 00:11:39.825 } 00:11:39.825 ] 00:11:39.825 }' 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.825 18:58:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.392 [2024-11-26 18:58:31.474887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.392 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.392 "name": "raid_bdev1", 00:11:40.392 "aliases": [ 00:11:40.392 "3736051d-cd29-4a1c-b2ca-db9542bf1c5f" 
00:11:40.392 ], 00:11:40.392 "product_name": "Raid Volume", 00:11:40.392 "block_size": 512, 00:11:40.392 "num_blocks": 253952, 00:11:40.392 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:40.392 "assigned_rate_limits": { 00:11:40.392 "rw_ios_per_sec": 0, 00:11:40.392 "rw_mbytes_per_sec": 0, 00:11:40.392 "r_mbytes_per_sec": 0, 00:11:40.392 "w_mbytes_per_sec": 0 00:11:40.392 }, 00:11:40.392 "claimed": false, 00:11:40.392 "zoned": false, 00:11:40.392 "supported_io_types": { 00:11:40.392 "read": true, 00:11:40.392 "write": true, 00:11:40.392 "unmap": true, 00:11:40.392 "flush": true, 00:11:40.392 "reset": true, 00:11:40.392 "nvme_admin": false, 00:11:40.392 "nvme_io": false, 00:11:40.392 "nvme_io_md": false, 00:11:40.392 "write_zeroes": true, 00:11:40.392 "zcopy": false, 00:11:40.392 "get_zone_info": false, 00:11:40.392 "zone_management": false, 00:11:40.392 "zone_append": false, 00:11:40.392 "compare": false, 00:11:40.392 "compare_and_write": false, 00:11:40.392 "abort": false, 00:11:40.392 "seek_hole": false, 00:11:40.392 "seek_data": false, 00:11:40.392 "copy": false, 00:11:40.392 "nvme_iov_md": false 00:11:40.392 }, 00:11:40.392 "memory_domains": [ 00:11:40.392 { 00:11:40.392 "dma_device_id": "system", 00:11:40.392 "dma_device_type": 1 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.392 "dma_device_type": 2 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": "system", 00:11:40.392 "dma_device_type": 1 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.392 "dma_device_type": 2 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": "system", 00:11:40.392 "dma_device_type": 1 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.392 "dma_device_type": 2 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": "system", 00:11:40.392 "dma_device_type": 1 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:40.392 "dma_device_type": 2 00:11:40.392 } 00:11:40.392 ], 00:11:40.392 "driver_specific": { 00:11:40.392 "raid": { 00:11:40.392 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:40.392 "strip_size_kb": 64, 00:11:40.392 "state": "online", 00:11:40.392 "raid_level": "concat", 00:11:40.392 "superblock": true, 00:11:40.392 "num_base_bdevs": 4, 00:11:40.392 "num_base_bdevs_discovered": 4, 00:11:40.392 "num_base_bdevs_operational": 4, 00:11:40.392 "base_bdevs_list": [ 00:11:40.392 { 00:11:40.392 "name": "pt1", 00:11:40.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.392 "is_configured": true, 00:11:40.392 "data_offset": 2048, 00:11:40.392 "data_size": 63488 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "name": "pt2", 00:11:40.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.392 "is_configured": true, 00:11:40.392 "data_offset": 2048, 00:11:40.392 "data_size": 63488 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "name": "pt3", 00:11:40.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.392 "is_configured": true, 00:11:40.392 "data_offset": 2048, 00:11:40.392 "data_size": 63488 00:11:40.392 }, 00:11:40.392 { 00:11:40.392 "name": "pt4", 00:11:40.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.393 "is_configured": true, 00:11:40.393 "data_offset": 2048, 00:11:40.393 "data_size": 63488 00:11:40.393 } 00:11:40.393 ] 00:11:40.393 } 00:11:40.393 } 00:11:40.393 }' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:40.393 pt2 00:11:40.393 pt3 00:11:40.393 pt4' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.393 18:58:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.393 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 [2024-11-26 18:58:31.814994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3736051d-cd29-4a1c-b2ca-db9542bf1c5f 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3736051d-cd29-4a1c-b2ca-db9542bf1c5f ']' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 [2024-11-26 18:58:31.862572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.652 [2024-11-26 18:58:31.862601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.652 [2024-11-26 18:58:31.862730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.652 [2024-11-26 18:58:31.862826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.652 [2024-11-26 18:58:31.862849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.652 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.653 18:58:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.653 18:58:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.653 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.911 [2024-11-26 18:58:32.018676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:40.911 [2024-11-26 18:58:32.021372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:40.911 [2024-11-26 18:58:32.021613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:40.911 [2024-11-26 18:58:32.021684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:40.911 [2024-11-26 18:58:32.021764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:40.911 [2024-11-26 18:58:32.021838] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:40.911 [2024-11-26 18:58:32.021871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:40.911 [2024-11-26 18:58:32.021924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:40.911 [2024-11-26 18:58:32.021951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.911 [2024-11-26 18:58:32.021968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:40.911 request: 00:11:40.911 { 00:11:40.911 "name": "raid_bdev1", 00:11:40.911 "raid_level": "concat", 00:11:40.911 "base_bdevs": [ 00:11:40.911 "malloc1", 00:11:40.911 "malloc2", 00:11:40.911 "malloc3", 00:11:40.911 "malloc4" 00:11:40.911 ], 00:11:40.911 "strip_size_kb": 64, 00:11:40.911 "superblock": false, 00:11:40.911 "method": "bdev_raid_create", 00:11:40.911 "req_id": 1 00:11:40.911 } 00:11:40.911 Got JSON-RPC error response 00:11:40.911 response: 00:11:40.911 { 00:11:40.911 "code": -17, 00:11:40.911 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:40.911 } 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.911 [2024-11-26 18:58:32.082663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:40.911 [2024-11-26 18:58:32.082757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.911 [2024-11-26 18:58:32.082785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:40.911 [2024-11-26 18:58:32.082803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.911 [2024-11-26 18:58:32.085845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.911 [2024-11-26 18:58:32.085912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:40.911 [2024-11-26 18:58:32.086004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:40.911 [2024-11-26 18:58:32.086076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.911 pt1 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.911 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.911 "name": "raid_bdev1", 00:11:40.911 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:40.911 "strip_size_kb": 64, 00:11:40.911 "state": "configuring", 00:11:40.912 "raid_level": "concat", 00:11:40.912 "superblock": true, 00:11:40.912 "num_base_bdevs": 4, 00:11:40.912 "num_base_bdevs_discovered": 1, 00:11:40.912 "num_base_bdevs_operational": 4, 00:11:40.912 "base_bdevs_list": [ 00:11:40.912 { 00:11:40.912 "name": "pt1", 00:11:40.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.912 "is_configured": true, 00:11:40.912 "data_offset": 2048, 00:11:40.912 "data_size": 63488 00:11:40.912 }, 00:11:40.912 { 00:11:40.912 "name": null, 00:11:40.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.912 "is_configured": false, 00:11:40.912 "data_offset": 2048, 00:11:40.912 "data_size": 63488 00:11:40.912 }, 00:11:40.912 { 00:11:40.912 "name": null, 00:11:40.912 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.912 "is_configured": false, 00:11:40.912 "data_offset": 2048, 00:11:40.912 "data_size": 63488 00:11:40.912 }, 00:11:40.912 { 00:11:40.912 "name": null, 00:11:40.912 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.912 "is_configured": false, 00:11:40.912 "data_offset": 2048, 00:11:40.912 "data_size": 63488 00:11:40.912 } 00:11:40.912 ] 00:11:40.912 }' 00:11:40.912 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.912 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.478 [2024-11-26 18:58:32.602898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.478 [2024-11-26 18:58:32.603013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.478 [2024-11-26 18:58:32.603044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:41.478 [2024-11-26 18:58:32.603063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.478 [2024-11-26 18:58:32.603715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.478 [2024-11-26 18:58:32.603766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.478 [2024-11-26 18:58:32.603872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.478 [2024-11-26 18:58:32.603936] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.478 pt2 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.478 [2024-11-26 18:58:32.610843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.478 18:58:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.478 "name": "raid_bdev1", 00:11:41.478 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:41.478 "strip_size_kb": 64, 00:11:41.478 "state": "configuring", 00:11:41.478 "raid_level": "concat", 00:11:41.478 "superblock": true, 00:11:41.478 "num_base_bdevs": 4, 00:11:41.478 "num_base_bdevs_discovered": 1, 00:11:41.478 "num_base_bdevs_operational": 4, 00:11:41.478 "base_bdevs_list": [ 00:11:41.478 { 00:11:41.478 "name": "pt1", 00:11:41.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.478 "is_configured": true, 00:11:41.478 "data_offset": 2048, 00:11:41.478 "data_size": 63488 00:11:41.478 }, 00:11:41.478 { 00:11:41.478 "name": null, 00:11:41.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.478 "is_configured": false, 00:11:41.478 "data_offset": 0, 00:11:41.478 "data_size": 63488 00:11:41.478 }, 00:11:41.478 { 00:11:41.478 "name": null, 00:11:41.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.478 "is_configured": false, 00:11:41.478 "data_offset": 2048, 00:11:41.478 "data_size": 63488 00:11:41.478 }, 00:11:41.478 { 00:11:41.478 "name": null, 00:11:41.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.478 "is_configured": false, 00:11:41.478 "data_offset": 2048, 00:11:41.478 "data_size": 63488 00:11:41.478 } 00:11:41.478 ] 00:11:41.478 }' 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.478 18:58:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.049 [2024-11-26 18:58:33.159090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.049 [2024-11-26 18:58:33.159303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.049 [2024-11-26 18:58:33.159356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:42.049 [2024-11-26 18:58:33.159372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.049 [2024-11-26 18:58:33.159981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.049 [2024-11-26 18:58:33.160006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.049 [2024-11-26 18:58:33.160115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.049 [2024-11-26 18:58:33.160154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.049 pt2 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.049 [2024-11-26 18:58:33.167064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.049 [2024-11-26 18:58:33.167119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.049 [2024-11-26 18:58:33.167146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:42.049 [2024-11-26 18:58:33.167159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.049 [2024-11-26 18:58:33.167642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.049 [2024-11-26 18:58:33.167682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.049 [2024-11-26 18:58:33.167761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:42.049 [2024-11-26 18:58:33.167796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.049 pt3 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.049 [2024-11-26 18:58:33.175023] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:42.049 [2024-11-26 18:58:33.175071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.049 [2024-11-26 18:58:33.175095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:42.049 [2024-11-26 18:58:33.175108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.049 [2024-11-26 18:58:33.175592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.049 [2024-11-26 18:58:33.175632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:42.049 [2024-11-26 18:58:33.175719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:42.049 [2024-11-26 18:58:33.175751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:42.049 [2024-11-26 18:58:33.175959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.049 [2024-11-26 18:58:33.175975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:42.049 [2024-11-26 18:58:33.176314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.049 [2024-11-26 18:58:33.176503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.049 [2024-11-26 18:58:33.176541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:42.049 [2024-11-26 18:58:33.176696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.049 pt4 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.049 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.049 "name": "raid_bdev1", 00:11:42.049 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:42.049 "strip_size_kb": 64, 00:11:42.049 "state": "online", 00:11:42.049 "raid_level": "concat", 00:11:42.049 
"superblock": true, 00:11:42.049 "num_base_bdevs": 4, 00:11:42.049 "num_base_bdevs_discovered": 4, 00:11:42.049 "num_base_bdevs_operational": 4, 00:11:42.049 "base_bdevs_list": [ 00:11:42.049 { 00:11:42.049 "name": "pt1", 00:11:42.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.049 "is_configured": true, 00:11:42.049 "data_offset": 2048, 00:11:42.049 "data_size": 63488 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "name": "pt2", 00:11:42.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.049 "is_configured": true, 00:11:42.049 "data_offset": 2048, 00:11:42.049 "data_size": 63488 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "name": "pt3", 00:11:42.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.049 "is_configured": true, 00:11:42.049 "data_offset": 2048, 00:11:42.049 "data_size": 63488 00:11:42.049 }, 00:11:42.049 { 00:11:42.049 "name": "pt4", 00:11:42.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.049 "is_configured": true, 00:11:42.050 "data_offset": 2048, 00:11:42.050 "data_size": 63488 00:11:42.050 } 00:11:42.050 ] 00:11:42.050 }' 00:11:42.050 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.050 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.624 18:58:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.624 [2024-11-26 18:58:33.727667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.624 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.624 "name": "raid_bdev1", 00:11:42.624 "aliases": [ 00:11:42.624 "3736051d-cd29-4a1c-b2ca-db9542bf1c5f" 00:11:42.624 ], 00:11:42.624 "product_name": "Raid Volume", 00:11:42.624 "block_size": 512, 00:11:42.624 "num_blocks": 253952, 00:11:42.624 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:42.624 "assigned_rate_limits": { 00:11:42.624 "rw_ios_per_sec": 0, 00:11:42.624 "rw_mbytes_per_sec": 0, 00:11:42.624 "r_mbytes_per_sec": 0, 00:11:42.624 "w_mbytes_per_sec": 0 00:11:42.624 }, 00:11:42.624 "claimed": false, 00:11:42.624 "zoned": false, 00:11:42.624 "supported_io_types": { 00:11:42.624 "read": true, 00:11:42.624 "write": true, 00:11:42.624 "unmap": true, 00:11:42.624 "flush": true, 00:11:42.624 "reset": true, 00:11:42.624 "nvme_admin": false, 00:11:42.624 "nvme_io": false, 00:11:42.624 "nvme_io_md": false, 00:11:42.624 "write_zeroes": true, 00:11:42.624 "zcopy": false, 00:11:42.624 "get_zone_info": false, 00:11:42.624 "zone_management": false, 00:11:42.624 "zone_append": false, 00:11:42.624 "compare": false, 00:11:42.624 "compare_and_write": false, 00:11:42.624 "abort": false, 00:11:42.624 "seek_hole": false, 00:11:42.624 "seek_data": false, 00:11:42.624 "copy": false, 00:11:42.624 "nvme_iov_md": false 00:11:42.624 }, 00:11:42.624 
"memory_domains": [ 00:11:42.624 { 00:11:42.624 "dma_device_id": "system", 00:11:42.624 "dma_device_type": 1 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.624 "dma_device_type": 2 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "system", 00:11:42.624 "dma_device_type": 1 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.624 "dma_device_type": 2 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "system", 00:11:42.624 "dma_device_type": 1 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.624 "dma_device_type": 2 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "system", 00:11:42.624 "dma_device_type": 1 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.624 "dma_device_type": 2 00:11:42.624 } 00:11:42.624 ], 00:11:42.624 "driver_specific": { 00:11:42.624 "raid": { 00:11:42.624 "uuid": "3736051d-cd29-4a1c-b2ca-db9542bf1c5f", 00:11:42.624 "strip_size_kb": 64, 00:11:42.624 "state": "online", 00:11:42.624 "raid_level": "concat", 00:11:42.624 "superblock": true, 00:11:42.624 "num_base_bdevs": 4, 00:11:42.624 "num_base_bdevs_discovered": 4, 00:11:42.624 "num_base_bdevs_operational": 4, 00:11:42.624 "base_bdevs_list": [ 00:11:42.624 { 00:11:42.624 "name": "pt1", 00:11:42.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.624 "is_configured": true, 00:11:42.624 "data_offset": 2048, 00:11:42.624 "data_size": 63488 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "name": "pt2", 00:11:42.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.624 "is_configured": true, 00:11:42.624 "data_offset": 2048, 00:11:42.624 "data_size": 63488 00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "name": "pt3", 00:11:42.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.624 "is_configured": true, 00:11:42.624 "data_offset": 2048, 00:11:42.624 "data_size": 63488 
00:11:42.624 }, 00:11:42.624 { 00:11:42.624 "name": "pt4", 00:11:42.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.624 "is_configured": true, 00:11:42.624 "data_offset": 2048, 00:11:42.624 "data_size": 63488 00:11:42.624 } 00:11:42.624 ] 00:11:42.625 } 00:11:42.625 } 00:11:42.625 }' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:42.625 pt2 00:11:42.625 pt3 00:11:42.625 pt4' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.625 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.884 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:42.884 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.884 18:58:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.884 18:58:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 [2024-11-26 18:58:34.111733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3736051d-cd29-4a1c-b2ca-db9542bf1c5f '!=' 3736051d-cd29-4a1c-b2ca-db9542bf1c5f ']' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72809 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72809 ']' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72809 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72809 00:11:42.884 killing process with pid 72809 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72809' 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72809 00:11:42.884 [2024-11-26 18:58:34.190606] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.884 18:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72809 00:11:42.884 [2024-11-26 18:58:34.190715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.884 [2024-11-26 18:58:34.190832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.884 [2024-11-26 18:58:34.190848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:43.452 [2024-11-26 18:58:34.565829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.388 18:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:44.388 00:11:44.388 real 0m6.071s 00:11:44.388 user 0m9.126s 00:11:44.388 sys 0m0.905s 00:11:44.388 18:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.388 18:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.388 ************************************ 00:11:44.388 END TEST raid_superblock_test 
00:11:44.388 ************************************ 00:11:44.388 18:58:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:44.388 18:58:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.388 18:58:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.388 18:58:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.388 ************************************ 00:11:44.388 START TEST raid_read_error_test 00:11:44.388 ************************************ 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oJ0KGX57t0 00:11:44.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73076 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73076 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73076 ']' 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.388 18:58:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.647 [2024-11-26 18:58:35.790639] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:44.648 [2024-11-26 18:58:35.791072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73076 ] 00:11:44.648 [2024-11-26 18:58:35.965410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.906 [2024-11-26 18:58:36.096644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.165 [2024-11-26 18:58:36.304197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.165 [2024-11-26 18:58:36.304239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.733 BaseBdev1_malloc 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.733 true 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.733 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.733 [2024-11-26 18:58:36.858721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.733 [2024-11-26 18:58:36.858816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.734 [2024-11-26 18:58:36.858850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.734 [2024-11-26 18:58:36.858867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.734 [2024-11-26 18:58:36.862034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.734 [2024-11-26 18:58:36.862083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.734 BaseBdev1 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 BaseBdev2_malloc 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 true 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 [2024-11-26 18:58:36.916207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.734 [2024-11-26 18:58:36.916288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.734 [2024-11-26 18:58:36.916311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.734 [2024-11-26 18:58:36.916327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.734 [2024-11-26 18:58:36.919189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.734 [2024-11-26 18:58:36.919235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.734 BaseBdev2 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 BaseBdev3_malloc 00:11:45.734 18:58:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 true 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 [2024-11-26 18:58:36.982726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.734 [2024-11-26 18:58:36.982806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.734 [2024-11-26 18:58:36.982831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.734 [2024-11-26 18:58:36.982848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.734 [2024-11-26 18:58:36.985639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.734 [2024-11-26 18:58:36.985699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.734 BaseBdev3 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 BaseBdev4_malloc 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 true 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 [2024-11-26 18:58:37.042635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:45.734 [2024-11-26 18:58:37.042730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.734 [2024-11-26 18:58:37.042767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.734 [2024-11-26 18:58:37.042783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.734 [2024-11-26 18:58:37.045760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.734 [2024-11-26 18:58:37.045826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:45.734 BaseBdev4 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 [2024-11-26 18:58:37.054768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.734 [2024-11-26 18:58:37.057358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.734 [2024-11-26 18:58:37.057459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.734 [2024-11-26 18:58:37.057554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.734 [2024-11-26 18:58:37.057846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:45.734 [2024-11-26 18:58:37.057886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.734 [2024-11-26 18:58:37.058277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:45.734 [2024-11-26 18:58:37.058476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:45.734 [2024-11-26 18:58:37.058494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:45.734 [2024-11-26 18:58:37.058749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.734 18:58:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.734 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.994 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.994 "name": "raid_bdev1", 00:11:45.994 "uuid": "f50c5cd2-64bd-4830-8ba9-7b178598d02a", 00:11:45.994 "strip_size_kb": 64, 00:11:45.994 "state": "online", 00:11:45.994 "raid_level": "concat", 00:11:45.994 "superblock": true, 00:11:45.994 "num_base_bdevs": 4, 00:11:45.994 "num_base_bdevs_discovered": 4, 00:11:45.994 "num_base_bdevs_operational": 4, 00:11:45.994 "base_bdevs_list": [ 
00:11:45.994 { 00:11:45.994 "name": "BaseBdev1", 00:11:45.994 "uuid": "14c5ac9c-a97b-5414-b676-5ec12ee0786f", 00:11:45.994 "is_configured": true, 00:11:45.994 "data_offset": 2048, 00:11:45.994 "data_size": 63488 00:11:45.994 }, 00:11:45.994 { 00:11:45.994 "name": "BaseBdev2", 00:11:45.994 "uuid": "e3669223-836e-5e8b-8fe5-79b935867903", 00:11:45.994 "is_configured": true, 00:11:45.994 "data_offset": 2048, 00:11:45.994 "data_size": 63488 00:11:45.994 }, 00:11:45.994 { 00:11:45.994 "name": "BaseBdev3", 00:11:45.994 "uuid": "720abd8b-1c80-5c0c-bfa9-be008663a4c3", 00:11:45.994 "is_configured": true, 00:11:45.994 "data_offset": 2048, 00:11:45.994 "data_size": 63488 00:11:45.994 }, 00:11:45.994 { 00:11:45.994 "name": "BaseBdev4", 00:11:45.994 "uuid": "7a33f315-bce5-568a-86cd-c15c9459b52c", 00:11:45.994 "is_configured": true, 00:11:45.994 "data_offset": 2048, 00:11:45.994 "data_size": 63488 00:11:45.994 } 00:11:45.994 ] 00:11:45.994 }' 00:11:45.994 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.994 18:58:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.564 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:46.564 18:58:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.564 [2024-11-26 18:58:37.752537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.507 18:58:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.507 18:58:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.507 "name": "raid_bdev1", 00:11:47.507 "uuid": "f50c5cd2-64bd-4830-8ba9-7b178598d02a", 00:11:47.507 "strip_size_kb": 64, 00:11:47.507 "state": "online", 00:11:47.507 "raid_level": "concat", 00:11:47.507 "superblock": true, 00:11:47.507 "num_base_bdevs": 4, 00:11:47.507 "num_base_bdevs_discovered": 4, 00:11:47.507 "num_base_bdevs_operational": 4, 00:11:47.507 "base_bdevs_list": [ 00:11:47.507 { 00:11:47.507 "name": "BaseBdev1", 00:11:47.507 "uuid": "14c5ac9c-a97b-5414-b676-5ec12ee0786f", 00:11:47.507 "is_configured": true, 00:11:47.507 "data_offset": 2048, 00:11:47.507 "data_size": 63488 00:11:47.507 }, 00:11:47.507 { 00:11:47.507 "name": "BaseBdev2", 00:11:47.507 "uuid": "e3669223-836e-5e8b-8fe5-79b935867903", 00:11:47.507 "is_configured": true, 00:11:47.507 "data_offset": 2048, 00:11:47.507 "data_size": 63488 00:11:47.507 }, 00:11:47.507 { 00:11:47.507 "name": "BaseBdev3", 00:11:47.507 "uuid": "720abd8b-1c80-5c0c-bfa9-be008663a4c3", 00:11:47.507 "is_configured": true, 00:11:47.507 "data_offset": 2048, 00:11:47.507 "data_size": 63488 00:11:47.507 }, 00:11:47.507 { 00:11:47.507 "name": "BaseBdev4", 00:11:47.507 "uuid": "7a33f315-bce5-568a-86cd-c15c9459b52c", 00:11:47.507 "is_configured": true, 00:11:47.507 "data_offset": 2048, 00:11:47.507 "data_size": 63488 00:11:47.507 } 00:11:47.507 ] 00:11:47.507 }' 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.507 18:58:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.074 [2024-11-26 18:58:39.164586] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.074 [2024-11-26 18:58:39.164757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.074 [2024-11-26 18:58:39.168622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.074 [2024-11-26 18:58:39.168915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.074 { 00:11:48.074 "results": [ 00:11:48.074 { 00:11:48.074 "job": "raid_bdev1", 00:11:48.074 "core_mask": "0x1", 00:11:48.074 "workload": "randrw", 00:11:48.074 "percentage": 50, 00:11:48.074 "status": "finished", 00:11:48.074 "queue_depth": 1, 00:11:48.074 "io_size": 131072, 00:11:48.074 "runtime": 1.409383, 00:11:48.074 "iops": 10076.040366600137, 00:11:48.074 "mibps": 1259.505045825017, 00:11:48.074 "io_failed": 1, 00:11:48.074 "io_timeout": 0, 00:11:48.074 "avg_latency_us": 138.25030738308305, 00:11:48.074 "min_latency_us": 36.305454545454545, 00:11:48.074 "max_latency_us": 1876.7127272727273 00:11:48.074 } 00:11:48.074 ], 00:11:48.074 "core_count": 1 00:11:48.074 } 00:11:48.074 [2024-11-26 18:58:39.169117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.074 [2024-11-26 18:58:39.169152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73076 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73076 ']' 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73076 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73076 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73076' 00:11:48.074 killing process with pid 73076 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73076 00:11:48.074 [2024-11-26 18:58:39.223553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.074 18:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73076 00:11:48.332 [2024-11-26 18:58:39.527815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.709 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oJ0KGX57t0 00:11:49.709 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.710 ************************************ 00:11:49.710 END TEST raid_read_error_test 00:11:49.710 ************************************ 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:49.710 00:11:49.710 real 0m4.982s 
00:11:49.710 user 0m6.196s 00:11:49.710 sys 0m0.588s 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.710 18:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.710 18:58:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:49.710 18:58:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:49.710 18:58:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.710 18:58:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.710 ************************************ 00:11:49.710 START TEST raid_write_error_test 00:11:49.710 ************************************ 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0ZeW0NiBQ5 00:11:49.710 18:58:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73222 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73222 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73222 ']' 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.710 18:58:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.710 [2024-11-26 18:58:40.838017] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:49.710 [2024-11-26 18:58:40.838194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73222 ] 00:11:49.710 [2024-11-26 18:58:41.020281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.969 [2024-11-26 18:58:41.148424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.227 [2024-11-26 18:58:41.353873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.227 [2024-11-26 18:58:41.353975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.485 BaseBdev1_malloc 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.485 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.486 true 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.486 [2024-11-26 18:58:41.826412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.486 [2024-11-26 18:58:41.826493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.486 [2024-11-26 18:58:41.826522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.486 [2024-11-26 18:58:41.826540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.486 [2024-11-26 18:58:41.829425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.486 [2024-11-26 18:58:41.829630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.486 BaseBdev1 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.486 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 BaseBdev2_malloc 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.745 18:58:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 true 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-11-26 18:58:41.883110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.745 [2024-11-26 18:58:41.883178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.745 [2024-11-26 18:58:41.883204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.745 [2024-11-26 18:58:41.883223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.745 [2024-11-26 18:58:41.886156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.745 [2024-11-26 18:58:41.886339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.745 BaseBdev2 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:50.745 BaseBdev3_malloc 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 true 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-11-26 18:58:41.954944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:50.745 [2024-11-26 18:58:41.955014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.745 [2024-11-26 18:58:41.955042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:50.745 [2024-11-26 18:58:41.955063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.745 [2024-11-26 18:58:41.957921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.745 [2024-11-26 18:58:41.957962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:50.745 BaseBdev3 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 BaseBdev4_malloc 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 true 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-11-26 18:58:42.011662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:50.745 [2024-11-26 18:58:42.011730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.745 [2024-11-26 18:58:42.011758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.745 [2024-11-26 18:58:42.011777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.745 [2024-11-26 18:58:42.014649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.745 [2024-11-26 18:58:42.014714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:50.745 BaseBdev4 
00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.745 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 [2024-11-26 18:58:42.019747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.745 [2024-11-26 18:58:42.022229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.745 [2024-11-26 18:58:42.022335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.745 [2024-11-26 18:58:42.022431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.745 [2024-11-26 18:58:42.022767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:50.745 [2024-11-26 18:58:42.022790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.746 [2024-11-26 18:58:42.023125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:50.746 [2024-11-26 18:58:42.023352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:50.746 [2024-11-26 18:58:42.023395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:50.746 [2024-11-26 18:58:42.023630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.746 "name": "raid_bdev1", 00:11:50.746 "uuid": "e981ac44-61f4-4624-a99e-1d3fb2ce55db", 00:11:50.746 "strip_size_kb": 64, 00:11:50.746 "state": "online", 00:11:50.746 "raid_level": "concat", 00:11:50.746 "superblock": true, 00:11:50.746 "num_base_bdevs": 4, 00:11:50.746 "num_base_bdevs_discovered": 4, 00:11:50.746 
"num_base_bdevs_operational": 4, 00:11:50.746 "base_bdevs_list": [ 00:11:50.746 { 00:11:50.746 "name": "BaseBdev1", 00:11:50.746 "uuid": "8093c9ec-ee41-5ba7-a84b-ffc589eeddd9", 00:11:50.746 "is_configured": true, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 }, 00:11:50.746 { 00:11:50.746 "name": "BaseBdev2", 00:11:50.746 "uuid": "fc137708-76b4-5208-ae30-642dd4662f5b", 00:11:50.746 "is_configured": true, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 }, 00:11:50.746 { 00:11:50.746 "name": "BaseBdev3", 00:11:50.746 "uuid": "e71808b8-3985-5801-9da8-b4e67969f379", 00:11:50.746 "is_configured": true, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 }, 00:11:50.746 { 00:11:50.746 "name": "BaseBdev4", 00:11:50.746 "uuid": "ff2a2316-db26-5148-b1de-a80cce61601f", 00:11:50.746 "is_configured": true, 00:11:50.746 "data_offset": 2048, 00:11:50.746 "data_size": 63488 00:11:50.746 } 00:11:50.746 ] 00:11:50.746 }' 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.746 18:58:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.312 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.312 18:58:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.312 [2024-11-26 18:58:42.653406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.250 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.251 18:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.251 18:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.251 18:58:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.251 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.251 "name": "raid_bdev1", 00:11:52.251 "uuid": "e981ac44-61f4-4624-a99e-1d3fb2ce55db", 00:11:52.251 "strip_size_kb": 64, 00:11:52.251 "state": "online", 00:11:52.251 "raid_level": "concat", 00:11:52.251 "superblock": true, 00:11:52.251 "num_base_bdevs": 4, 00:11:52.251 "num_base_bdevs_discovered": 4, 00:11:52.251 "num_base_bdevs_operational": 4, 00:11:52.251 "base_bdevs_list": [ 00:11:52.251 { 00:11:52.251 "name": "BaseBdev1", 00:11:52.251 "uuid": "8093c9ec-ee41-5ba7-a84b-ffc589eeddd9", 00:11:52.251 "is_configured": true, 00:11:52.251 "data_offset": 2048, 00:11:52.251 "data_size": 63488 00:11:52.251 }, 00:11:52.251 { 00:11:52.251 "name": "BaseBdev2", 00:11:52.251 "uuid": "fc137708-76b4-5208-ae30-642dd4662f5b", 00:11:52.251 "is_configured": true, 00:11:52.251 "data_offset": 2048, 00:11:52.251 "data_size": 63488 00:11:52.251 }, 00:11:52.251 { 00:11:52.251 "name": "BaseBdev3", 00:11:52.251 "uuid": "e71808b8-3985-5801-9da8-b4e67969f379", 00:11:52.251 "is_configured": true, 00:11:52.251 "data_offset": 2048, 00:11:52.251 "data_size": 63488 00:11:52.251 }, 00:11:52.251 { 00:11:52.251 "name": "BaseBdev4", 00:11:52.251 "uuid": "ff2a2316-db26-5148-b1de-a80cce61601f", 00:11:52.251 "is_configured": true, 00:11:52.251 "data_offset": 2048, 00:11:52.251 "data_size": 63488 00:11:52.251 } 00:11:52.251 ] 00:11:52.251 }' 00:11:52.251 18:58:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.251 18:58:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 18:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.883 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.883 18:58:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.883 [2024-11-26 18:58:44.050621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.883 [2024-11-26 18:58:44.050660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.883 [2024-11-26 18:58:44.054279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.883 [2024-11-26 18:58:44.054544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.883 [2024-11-26 18:58:44.054738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.884 [2024-11-26 18:58:44.054912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.884 { 00:11:52.884 "results": [ 00:11:52.884 { 00:11:52.884 "job": "raid_bdev1", 00:11:52.884 "core_mask": "0x1", 00:11:52.884 "workload": "randrw", 00:11:52.884 "percentage": 50, 00:11:52.884 "status": "finished", 00:11:52.884 "queue_depth": 1, 00:11:52.884 "io_size": 131072, 00:11:52.884 "runtime": 1.394327, 00:11:52.884 "iops": 10259.429818112967, 00:11:52.884 "mibps": 1282.4287272641209, 00:11:52.884 "io_failed": 1, 00:11:52.884 "io_timeout": 0, 00:11:52.884 "avg_latency_us": 135.70569500400342, 00:11:52.884 "min_latency_us": 38.63272727272727, 00:11:52.884 "max_latency_us": 1817.1345454545456 00:11:52.884 } 00:11:52.884 ], 00:11:52.884 "core_count": 1 00:11:52.884 } 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73222 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73222 ']' 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73222 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73222 00:11:52.884 killing process with pid 73222 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73222' 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73222 00:11:52.884 [2024-11-26 18:58:44.091474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.884 18:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73222 00:11:53.157 [2024-11-26 18:58:44.390800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0ZeW0NiBQ5 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:54.537 ************************************ 00:11:54.537 END TEST 
raid_write_error_test 00:11:54.537 ************************************ 00:11:54.537 00:11:54.537 real 0m4.810s 00:11:54.537 user 0m5.893s 00:11:54.537 sys 0m0.567s 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.537 18:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.537 18:58:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:54.537 18:58:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:54.537 18:58:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:54.537 18:58:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.537 18:58:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.537 ************************************ 00:11:54.537 START TEST raid_state_function_test 00:11:54.537 ************************************ 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.537 18:58:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:54.537 18:58:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73365 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73365' 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.537 Process raid pid: 73365 00:11:54.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73365 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73365 ']' 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.537 18:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.537 [2024-11-26 18:58:45.677778] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:11:54.537 [2024-11-26 18:58:45.677968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.537 [2024-11-26 18:58:45.855132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.796 [2024-11-26 18:58:45.989341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.055 [2024-11-26 18:58:46.195815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.055 [2024-11-26 18:58:46.195882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.313 [2024-11-26 18:58:46.667087] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.313 [2024-11-26 18:58:46.667153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.313 [2024-11-26 18:58:46.667171] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.313 [2024-11-26 18:58:46.667186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.313 [2024-11-26 18:58:46.667197] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:55.313 [2024-11-26 18:58:46.667213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.313 [2024-11-26 18:58:46.667223] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.313 [2024-11-26 18:58:46.667237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.313 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.314 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.573 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.573 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.573 "name": "Existed_Raid", 00:11:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.573 "strip_size_kb": 0, 00:11:55.573 "state": "configuring", 00:11:55.573 "raid_level": "raid1", 00:11:55.573 "superblock": false, 00:11:55.573 "num_base_bdevs": 4, 00:11:55.573 "num_base_bdevs_discovered": 0, 00:11:55.573 "num_base_bdevs_operational": 4, 00:11:55.573 "base_bdevs_list": [ 00:11:55.573 { 00:11:55.573 "name": "BaseBdev1", 00:11:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.573 "is_configured": false, 00:11:55.573 "data_offset": 0, 00:11:55.573 "data_size": 0 00:11:55.573 }, 00:11:55.573 { 00:11:55.573 "name": "BaseBdev2", 00:11:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.573 "is_configured": false, 00:11:55.573 "data_offset": 0, 00:11:55.573 "data_size": 0 00:11:55.573 }, 00:11:55.573 { 00:11:55.573 "name": "BaseBdev3", 00:11:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.573 "is_configured": false, 00:11:55.573 "data_offset": 0, 00:11:55.573 "data_size": 0 00:11:55.573 }, 00:11:55.573 { 00:11:55.573 "name": "BaseBdev4", 00:11:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.573 "is_configured": false, 00:11:55.573 "data_offset": 0, 00:11:55.573 "data_size": 0 00:11:55.573 } 00:11:55.573 ] 00:11:55.573 }' 00:11:55.573 18:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.573 18:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.832 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:55.832 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.832 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.832 [2024-11-26 18:58:47.195232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.832 [2024-11-26 18:58:47.195279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.091 [2024-11-26 18:58:47.203191] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.091 [2024-11-26 18:58:47.203244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.091 [2024-11-26 18:58:47.203261] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.091 [2024-11-26 18:58:47.203276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.091 [2024-11-26 18:58:47.203286] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.091 [2024-11-26 18:58:47.203300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.091 [2024-11-26 18:58:47.203309] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.091 [2024-11-26 18:58:47.203324] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.091 [2024-11-26 18:58:47.249126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.091 BaseBdev1 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.091 [ 00:11:56.091 { 00:11:56.091 "name": "BaseBdev1", 00:11:56.091 "aliases": [ 00:11:56.091 "52f06604-0c82-41f5-ace2-91d06f79b955" 00:11:56.091 ], 00:11:56.091 "product_name": "Malloc disk", 00:11:56.091 "block_size": 512, 00:11:56.091 "num_blocks": 65536, 00:11:56.091 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:56.091 "assigned_rate_limits": { 00:11:56.091 "rw_ios_per_sec": 0, 00:11:56.091 "rw_mbytes_per_sec": 0, 00:11:56.091 "r_mbytes_per_sec": 0, 00:11:56.091 "w_mbytes_per_sec": 0 00:11:56.091 }, 00:11:56.091 "claimed": true, 00:11:56.091 "claim_type": "exclusive_write", 00:11:56.091 "zoned": false, 00:11:56.091 "supported_io_types": { 00:11:56.091 "read": true, 00:11:56.091 "write": true, 00:11:56.091 "unmap": true, 00:11:56.091 "flush": true, 00:11:56.091 "reset": true, 00:11:56.091 "nvme_admin": false, 00:11:56.091 "nvme_io": false, 00:11:56.091 "nvme_io_md": false, 00:11:56.091 "write_zeroes": true, 00:11:56.091 "zcopy": true, 00:11:56.091 "get_zone_info": false, 00:11:56.091 "zone_management": false, 00:11:56.091 "zone_append": false, 00:11:56.091 "compare": false, 00:11:56.091 "compare_and_write": false, 00:11:56.091 "abort": true, 00:11:56.091 "seek_hole": false, 00:11:56.091 "seek_data": false, 00:11:56.091 "copy": true, 00:11:56.091 "nvme_iov_md": false 00:11:56.091 }, 00:11:56.091 "memory_domains": [ 00:11:56.091 { 00:11:56.091 "dma_device_id": "system", 00:11:56.091 "dma_device_type": 1 00:11:56.091 }, 00:11:56.091 { 00:11:56.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.091 "dma_device_type": 2 00:11:56.091 } 00:11:56.091 ], 00:11:56.091 "driver_specific": {} 00:11:56.091 } 00:11:56.091 ] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.091 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.091 "name": "Existed_Raid", 
00:11:56.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.091 "strip_size_kb": 0, 00:11:56.091 "state": "configuring", 00:11:56.091 "raid_level": "raid1", 00:11:56.091 "superblock": false, 00:11:56.091 "num_base_bdevs": 4, 00:11:56.091 "num_base_bdevs_discovered": 1, 00:11:56.092 "num_base_bdevs_operational": 4, 00:11:56.092 "base_bdevs_list": [ 00:11:56.092 { 00:11:56.092 "name": "BaseBdev1", 00:11:56.092 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:56.092 "is_configured": true, 00:11:56.092 "data_offset": 0, 00:11:56.092 "data_size": 65536 00:11:56.092 }, 00:11:56.092 { 00:11:56.092 "name": "BaseBdev2", 00:11:56.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.092 "is_configured": false, 00:11:56.092 "data_offset": 0, 00:11:56.092 "data_size": 0 00:11:56.092 }, 00:11:56.092 { 00:11:56.092 "name": "BaseBdev3", 00:11:56.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.092 "is_configured": false, 00:11:56.092 "data_offset": 0, 00:11:56.092 "data_size": 0 00:11:56.092 }, 00:11:56.092 { 00:11:56.092 "name": "BaseBdev4", 00:11:56.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.092 "is_configured": false, 00:11:56.092 "data_offset": 0, 00:11:56.092 "data_size": 0 00:11:56.092 } 00:11:56.092 ] 00:11:56.092 }' 00:11:56.092 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.092 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.659 [2024-11-26 18:58:47.813318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.659 [2024-11-26 18:58:47.813382] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.659 [2024-11-26 18:58:47.821353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.659 [2024-11-26 18:58:47.823836] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.659 [2024-11-26 18:58:47.823891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.659 [2024-11-26 18:58:47.823929] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.659 [2024-11-26 18:58:47.823948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.659 [2024-11-26 18:58:47.823958] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.659 [2024-11-26 18:58:47.823971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.659 
18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.659 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.659 "name": "Existed_Raid", 00:11:56.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.659 "strip_size_kb": 0, 00:11:56.659 "state": "configuring", 00:11:56.659 "raid_level": "raid1", 00:11:56.659 "superblock": false, 00:11:56.659 "num_base_bdevs": 4, 00:11:56.659 "num_base_bdevs_discovered": 1, 
00:11:56.659 "num_base_bdevs_operational": 4, 00:11:56.659 "base_bdevs_list": [ 00:11:56.659 { 00:11:56.660 "name": "BaseBdev1", 00:11:56.660 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:56.660 "is_configured": true, 00:11:56.660 "data_offset": 0, 00:11:56.660 "data_size": 65536 00:11:56.660 }, 00:11:56.660 { 00:11:56.660 "name": "BaseBdev2", 00:11:56.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.660 "is_configured": false, 00:11:56.660 "data_offset": 0, 00:11:56.660 "data_size": 0 00:11:56.660 }, 00:11:56.660 { 00:11:56.660 "name": "BaseBdev3", 00:11:56.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.660 "is_configured": false, 00:11:56.660 "data_offset": 0, 00:11:56.660 "data_size": 0 00:11:56.660 }, 00:11:56.660 { 00:11:56.660 "name": "BaseBdev4", 00:11:56.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.660 "is_configured": false, 00:11:56.660 "data_offset": 0, 00:11:56.660 "data_size": 0 00:11:56.660 } 00:11:56.660 ] 00:11:56.660 }' 00:11:56.660 18:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.660 18:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.228 [2024-11-26 18:58:48.368628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.228 BaseBdev2 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.228 [ 00:11:57.228 { 00:11:57.228 "name": "BaseBdev2", 00:11:57.228 "aliases": [ 00:11:57.228 "1def9b13-5c5e-49c0-a008-47d02790d9a0" 00:11:57.228 ], 00:11:57.228 "product_name": "Malloc disk", 00:11:57.228 "block_size": 512, 00:11:57.228 "num_blocks": 65536, 00:11:57.228 "uuid": "1def9b13-5c5e-49c0-a008-47d02790d9a0", 00:11:57.228 "assigned_rate_limits": { 00:11:57.228 "rw_ios_per_sec": 0, 00:11:57.228 "rw_mbytes_per_sec": 0, 00:11:57.228 "r_mbytes_per_sec": 0, 00:11:57.228 "w_mbytes_per_sec": 0 00:11:57.228 }, 00:11:57.228 "claimed": true, 00:11:57.228 "claim_type": "exclusive_write", 00:11:57.228 "zoned": false, 00:11:57.228 "supported_io_types": { 00:11:57.228 "read": true, 
00:11:57.228 "write": true, 00:11:57.228 "unmap": true, 00:11:57.228 "flush": true, 00:11:57.228 "reset": true, 00:11:57.228 "nvme_admin": false, 00:11:57.228 "nvme_io": false, 00:11:57.228 "nvme_io_md": false, 00:11:57.228 "write_zeroes": true, 00:11:57.228 "zcopy": true, 00:11:57.228 "get_zone_info": false, 00:11:57.228 "zone_management": false, 00:11:57.228 "zone_append": false, 00:11:57.228 "compare": false, 00:11:57.228 "compare_and_write": false, 00:11:57.228 "abort": true, 00:11:57.228 "seek_hole": false, 00:11:57.228 "seek_data": false, 00:11:57.228 "copy": true, 00:11:57.228 "nvme_iov_md": false 00:11:57.228 }, 00:11:57.228 "memory_domains": [ 00:11:57.228 { 00:11:57.228 "dma_device_id": "system", 00:11:57.228 "dma_device_type": 1 00:11:57.228 }, 00:11:57.228 { 00:11:57.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.228 "dma_device_type": 2 00:11:57.228 } 00:11:57.228 ], 00:11:57.228 "driver_specific": {} 00:11:57.228 } 00:11:57.228 ] 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.228 "name": "Existed_Raid", 00:11:57.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.228 "strip_size_kb": 0, 00:11:57.228 "state": "configuring", 00:11:57.228 "raid_level": "raid1", 00:11:57.228 "superblock": false, 00:11:57.228 "num_base_bdevs": 4, 00:11:57.228 "num_base_bdevs_discovered": 2, 00:11:57.228 "num_base_bdevs_operational": 4, 00:11:57.228 "base_bdevs_list": [ 00:11:57.228 { 00:11:57.228 "name": "BaseBdev1", 00:11:57.228 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:57.228 "is_configured": true, 00:11:57.228 "data_offset": 0, 00:11:57.228 "data_size": 65536 00:11:57.228 }, 00:11:57.228 { 00:11:57.228 "name": "BaseBdev2", 00:11:57.228 "uuid": "1def9b13-5c5e-49c0-a008-47d02790d9a0", 00:11:57.228 "is_configured": true, 
00:11:57.228 "data_offset": 0, 00:11:57.228 "data_size": 65536 00:11:57.228 }, 00:11:57.228 { 00:11:57.228 "name": "BaseBdev3", 00:11:57.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.228 "is_configured": false, 00:11:57.228 "data_offset": 0, 00:11:57.228 "data_size": 0 00:11:57.228 }, 00:11:57.228 { 00:11:57.228 "name": "BaseBdev4", 00:11:57.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.228 "is_configured": false, 00:11:57.228 "data_offset": 0, 00:11:57.228 "data_size": 0 00:11:57.228 } 00:11:57.228 ] 00:11:57.228 }' 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.228 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.797 [2024-11-26 18:58:48.988903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.797 BaseBdev3 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.797 18:58:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.797 [ 00:11:57.797 { 00:11:57.797 "name": "BaseBdev3", 00:11:57.797 "aliases": [ 00:11:57.797 "bd5849a1-ae89-4123-bbd9-0d7a2903b7e0" 00:11:57.797 ], 00:11:57.797 "product_name": "Malloc disk", 00:11:57.797 "block_size": 512, 00:11:57.797 "num_blocks": 65536, 00:11:57.797 "uuid": "bd5849a1-ae89-4123-bbd9-0d7a2903b7e0", 00:11:57.797 "assigned_rate_limits": { 00:11:57.797 "rw_ios_per_sec": 0, 00:11:57.797 "rw_mbytes_per_sec": 0, 00:11:57.797 "r_mbytes_per_sec": 0, 00:11:57.797 "w_mbytes_per_sec": 0 00:11:57.797 }, 00:11:57.797 "claimed": true, 00:11:57.797 "claim_type": "exclusive_write", 00:11:57.797 "zoned": false, 00:11:57.797 "supported_io_types": { 00:11:57.797 "read": true, 00:11:57.797 "write": true, 00:11:57.797 "unmap": true, 00:11:57.797 "flush": true, 00:11:57.797 "reset": true, 00:11:57.797 "nvme_admin": false, 00:11:57.797 "nvme_io": false, 00:11:57.797 "nvme_io_md": false, 00:11:57.797 "write_zeroes": true, 00:11:57.797 "zcopy": true, 00:11:57.797 "get_zone_info": false, 00:11:57.797 "zone_management": false, 00:11:57.797 "zone_append": false, 00:11:57.797 "compare": false, 00:11:57.797 "compare_and_write": false, 
00:11:57.797 "abort": true, 00:11:57.797 "seek_hole": false, 00:11:57.797 "seek_data": false, 00:11:57.797 "copy": true, 00:11:57.797 "nvme_iov_md": false 00:11:57.797 }, 00:11:57.797 "memory_domains": [ 00:11:57.797 { 00:11:57.797 "dma_device_id": "system", 00:11:57.797 "dma_device_type": 1 00:11:57.797 }, 00:11:57.797 { 00:11:57.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.797 "dma_device_type": 2 00:11:57.797 } 00:11:57.797 ], 00:11:57.797 "driver_specific": {} 00:11:57.797 } 00:11:57.797 ] 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.797 "name": "Existed_Raid", 00:11:57.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.797 "strip_size_kb": 0, 00:11:57.797 "state": "configuring", 00:11:57.797 "raid_level": "raid1", 00:11:57.797 "superblock": false, 00:11:57.797 "num_base_bdevs": 4, 00:11:57.797 "num_base_bdevs_discovered": 3, 00:11:57.797 "num_base_bdevs_operational": 4, 00:11:57.797 "base_bdevs_list": [ 00:11:57.797 { 00:11:57.797 "name": "BaseBdev1", 00:11:57.797 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:57.797 "is_configured": true, 00:11:57.797 "data_offset": 0, 00:11:57.797 "data_size": 65536 00:11:57.797 }, 00:11:57.797 { 00:11:57.797 "name": "BaseBdev2", 00:11:57.797 "uuid": "1def9b13-5c5e-49c0-a008-47d02790d9a0", 00:11:57.797 "is_configured": true, 00:11:57.797 "data_offset": 0, 00:11:57.797 "data_size": 65536 00:11:57.797 }, 00:11:57.797 { 00:11:57.797 "name": "BaseBdev3", 00:11:57.797 "uuid": "bd5849a1-ae89-4123-bbd9-0d7a2903b7e0", 00:11:57.797 "is_configured": true, 00:11:57.797 "data_offset": 0, 00:11:57.797 "data_size": 65536 00:11:57.797 }, 00:11:57.797 { 00:11:57.797 "name": "BaseBdev4", 00:11:57.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.797 "is_configured": false, 
00:11:57.797 "data_offset": 0, 00:11:57.797 "data_size": 0 00:11:57.797 } 00:11:57.797 ] 00:11:57.797 }' 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.797 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.377 [2024-11-26 18:58:49.583061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.377 [2024-11-26 18:58:49.583432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.377 [2024-11-26 18:58:49.583457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:58.377 [2024-11-26 18:58:49.583830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:58.377 [2024-11-26 18:58:49.584157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.377 [2024-11-26 18:58:49.584180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:58.377 [2024-11-26 18:58:49.584532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.377 BaseBdev4 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.377 [ 00:11:58.377 { 00:11:58.377 "name": "BaseBdev4", 00:11:58.377 "aliases": [ 00:11:58.377 "a5dfc187-600d-4a80-9cd3-b15c9d848d58" 00:11:58.377 ], 00:11:58.377 "product_name": "Malloc disk", 00:11:58.377 "block_size": 512, 00:11:58.377 "num_blocks": 65536, 00:11:58.377 "uuid": "a5dfc187-600d-4a80-9cd3-b15c9d848d58", 00:11:58.377 "assigned_rate_limits": { 00:11:58.377 "rw_ios_per_sec": 0, 00:11:58.377 "rw_mbytes_per_sec": 0, 00:11:58.377 "r_mbytes_per_sec": 0, 00:11:58.377 "w_mbytes_per_sec": 0 00:11:58.377 }, 00:11:58.377 "claimed": true, 00:11:58.377 "claim_type": "exclusive_write", 00:11:58.377 "zoned": false, 00:11:58.377 "supported_io_types": { 00:11:58.377 "read": true, 00:11:58.377 "write": true, 00:11:58.377 "unmap": true, 00:11:58.377 "flush": true, 00:11:58.377 "reset": true, 00:11:58.377 
"nvme_admin": false, 00:11:58.377 "nvme_io": false, 00:11:58.377 "nvme_io_md": false, 00:11:58.377 "write_zeroes": true, 00:11:58.377 "zcopy": true, 00:11:58.377 "get_zone_info": false, 00:11:58.377 "zone_management": false, 00:11:58.377 "zone_append": false, 00:11:58.377 "compare": false, 00:11:58.377 "compare_and_write": false, 00:11:58.377 "abort": true, 00:11:58.377 "seek_hole": false, 00:11:58.377 "seek_data": false, 00:11:58.377 "copy": true, 00:11:58.377 "nvme_iov_md": false 00:11:58.377 }, 00:11:58.377 "memory_domains": [ 00:11:58.377 { 00:11:58.377 "dma_device_id": "system", 00:11:58.377 "dma_device_type": 1 00:11:58.377 }, 00:11:58.377 { 00:11:58.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.377 "dma_device_type": 2 00:11:58.377 } 00:11:58.377 ], 00:11:58.377 "driver_specific": {} 00:11:58.377 } 00:11:58.377 ] 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.377 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.378 18:58:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.378 "name": "Existed_Raid", 00:11:58.378 "uuid": "4ff621cb-eb27-455e-a29f-f9753000e379", 00:11:58.378 "strip_size_kb": 0, 00:11:58.378 "state": "online", 00:11:58.378 "raid_level": "raid1", 00:11:58.378 "superblock": false, 00:11:58.378 "num_base_bdevs": 4, 00:11:58.378 "num_base_bdevs_discovered": 4, 00:11:58.378 "num_base_bdevs_operational": 4, 00:11:58.378 "base_bdevs_list": [ 00:11:58.378 { 00:11:58.378 "name": "BaseBdev1", 00:11:58.378 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:58.378 "is_configured": true, 00:11:58.378 "data_offset": 0, 00:11:58.378 "data_size": 65536 00:11:58.378 }, 00:11:58.378 { 00:11:58.378 "name": "BaseBdev2", 00:11:58.378 "uuid": "1def9b13-5c5e-49c0-a008-47d02790d9a0", 00:11:58.378 "is_configured": true, 00:11:58.378 "data_offset": 0, 00:11:58.378 "data_size": 65536 00:11:58.378 }, 00:11:58.378 { 00:11:58.378 "name": "BaseBdev3", 00:11:58.378 "uuid": 
"bd5849a1-ae89-4123-bbd9-0d7a2903b7e0", 00:11:58.378 "is_configured": true, 00:11:58.378 "data_offset": 0, 00:11:58.378 "data_size": 65536 00:11:58.378 }, 00:11:58.378 { 00:11:58.378 "name": "BaseBdev4", 00:11:58.378 "uuid": "a5dfc187-600d-4a80-9cd3-b15c9d848d58", 00:11:58.378 "is_configured": true, 00:11:58.378 "data_offset": 0, 00:11:58.378 "data_size": 65536 00:11:58.378 } 00:11:58.378 ] 00:11:58.378 }' 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.378 18:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.947 [2024-11-26 18:58:50.183770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.947 18:58:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.947 "name": "Existed_Raid", 00:11:58.947 "aliases": [ 00:11:58.947 "4ff621cb-eb27-455e-a29f-f9753000e379" 00:11:58.947 ], 00:11:58.947 "product_name": "Raid Volume", 00:11:58.947 "block_size": 512, 00:11:58.947 "num_blocks": 65536, 00:11:58.947 "uuid": "4ff621cb-eb27-455e-a29f-f9753000e379", 00:11:58.947 "assigned_rate_limits": { 00:11:58.947 "rw_ios_per_sec": 0, 00:11:58.947 "rw_mbytes_per_sec": 0, 00:11:58.947 "r_mbytes_per_sec": 0, 00:11:58.947 "w_mbytes_per_sec": 0 00:11:58.947 }, 00:11:58.947 "claimed": false, 00:11:58.947 "zoned": false, 00:11:58.947 "supported_io_types": { 00:11:58.947 "read": true, 00:11:58.947 "write": true, 00:11:58.947 "unmap": false, 00:11:58.947 "flush": false, 00:11:58.947 "reset": true, 00:11:58.947 "nvme_admin": false, 00:11:58.947 "nvme_io": false, 00:11:58.947 "nvme_io_md": false, 00:11:58.947 "write_zeroes": true, 00:11:58.947 "zcopy": false, 00:11:58.947 "get_zone_info": false, 00:11:58.947 "zone_management": false, 00:11:58.947 "zone_append": false, 00:11:58.947 "compare": false, 00:11:58.947 "compare_and_write": false, 00:11:58.947 "abort": false, 00:11:58.947 "seek_hole": false, 00:11:58.947 "seek_data": false, 00:11:58.947 "copy": false, 00:11:58.947 "nvme_iov_md": false 00:11:58.947 }, 00:11:58.947 "memory_domains": [ 00:11:58.947 { 00:11:58.947 "dma_device_id": "system", 00:11:58.947 "dma_device_type": 1 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.947 "dma_device_type": 2 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "system", 00:11:58.947 "dma_device_type": 1 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.947 "dma_device_type": 2 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "system", 00:11:58.947 "dma_device_type": 1 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:58.947 "dma_device_type": 2 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "system", 00:11:58.947 "dma_device_type": 1 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.947 "dma_device_type": 2 00:11:58.947 } 00:11:58.947 ], 00:11:58.947 "driver_specific": { 00:11:58.947 "raid": { 00:11:58.947 "uuid": "4ff621cb-eb27-455e-a29f-f9753000e379", 00:11:58.947 "strip_size_kb": 0, 00:11:58.947 "state": "online", 00:11:58.947 "raid_level": "raid1", 00:11:58.947 "superblock": false, 00:11:58.947 "num_base_bdevs": 4, 00:11:58.947 "num_base_bdevs_discovered": 4, 00:11:58.947 "num_base_bdevs_operational": 4, 00:11:58.947 "base_bdevs_list": [ 00:11:58.947 { 00:11:58.947 "name": "BaseBdev1", 00:11:58.947 "uuid": "52f06604-0c82-41f5-ace2-91d06f79b955", 00:11:58.947 "is_configured": true, 00:11:58.947 "data_offset": 0, 00:11:58.947 "data_size": 65536 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "name": "BaseBdev2", 00:11:58.947 "uuid": "1def9b13-5c5e-49c0-a008-47d02790d9a0", 00:11:58.947 "is_configured": true, 00:11:58.947 "data_offset": 0, 00:11:58.947 "data_size": 65536 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "name": "BaseBdev3", 00:11:58.947 "uuid": "bd5849a1-ae89-4123-bbd9-0d7a2903b7e0", 00:11:58.947 "is_configured": true, 00:11:58.947 "data_offset": 0, 00:11:58.947 "data_size": 65536 00:11:58.947 }, 00:11:58.947 { 00:11:58.947 "name": "BaseBdev4", 00:11:58.947 "uuid": "a5dfc187-600d-4a80-9cd3-b15c9d848d58", 00:11:58.947 "is_configured": true, 00:11:58.947 "data_offset": 0, 00:11:58.947 "data_size": 65536 00:11:58.947 } 00:11:58.947 ] 00:11:58.947 } 00:11:58.947 } 00:11:58.947 }' 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:58.947 BaseBdev2 00:11:58.947 BaseBdev3 
00:11:58.947 BaseBdev4' 00:11:58.947 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.206 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.207 18:58:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.207 18:58:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.207 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.207 [2024-11-26 18:58:50.527531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.465 
18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.465 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.465 "name": "Existed_Raid", 00:11:59.465 "uuid": "4ff621cb-eb27-455e-a29f-f9753000e379", 00:11:59.465 "strip_size_kb": 0, 00:11:59.465 "state": "online", 00:11:59.466 "raid_level": "raid1", 00:11:59.466 "superblock": false, 00:11:59.466 "num_base_bdevs": 4, 00:11:59.466 "num_base_bdevs_discovered": 3, 00:11:59.466 "num_base_bdevs_operational": 3, 00:11:59.466 "base_bdevs_list": [ 00:11:59.466 { 00:11:59.466 "name": null, 00:11:59.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.466 "is_configured": false, 00:11:59.466 "data_offset": 0, 00:11:59.466 "data_size": 65536 00:11:59.466 }, 00:11:59.466 { 00:11:59.466 "name": "BaseBdev2", 00:11:59.466 "uuid": "1def9b13-5c5e-49c0-a008-47d02790d9a0", 00:11:59.466 "is_configured": true, 00:11:59.466 "data_offset": 0, 00:11:59.466 "data_size": 65536 00:11:59.466 }, 00:11:59.466 { 00:11:59.466 "name": "BaseBdev3", 00:11:59.466 "uuid": "bd5849a1-ae89-4123-bbd9-0d7a2903b7e0", 00:11:59.466 "is_configured": true, 00:11:59.466 "data_offset": 0, 
00:11:59.466 "data_size": 65536 00:11:59.466 }, 00:11:59.466 { 00:11:59.466 "name": "BaseBdev4", 00:11:59.466 "uuid": "a5dfc187-600d-4a80-9cd3-b15c9d848d58", 00:11:59.466 "is_configured": true, 00:11:59.466 "data_offset": 0, 00:11:59.466 "data_size": 65536 00:11:59.466 } 00:11:59.466 ] 00:11:59.466 }' 00:11:59.466 18:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.466 18:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.033 [2024-11-26 18:58:51.215416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.033 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.034 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.034 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.034 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.034 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:00.034 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.034 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.034 [2024-11-26 18:58:51.364858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.292 [2024-11-26 18:58:51.513747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:00.292 [2024-11-26 18:58:51.513882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.292 [2024-11-26 18:58:51.602404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.292 [2024-11-26 18:58:51.602504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.292 [2024-11-26 18:58:51.602525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.292 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 BaseBdev2 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 [ 00:12:00.551 { 00:12:00.551 "name": "BaseBdev2", 00:12:00.551 "aliases": [ 00:12:00.551 "be48575c-4fe3-40f7-a716-4570d9149101" 00:12:00.551 ], 00:12:00.551 "product_name": "Malloc disk", 00:12:00.551 "block_size": 512, 00:12:00.551 "num_blocks": 65536, 00:12:00.551 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:00.551 "assigned_rate_limits": { 00:12:00.551 "rw_ios_per_sec": 0, 00:12:00.551 "rw_mbytes_per_sec": 0, 00:12:00.551 "r_mbytes_per_sec": 0, 00:12:00.551 "w_mbytes_per_sec": 0 00:12:00.551 }, 00:12:00.551 "claimed": false, 00:12:00.551 "zoned": false, 00:12:00.551 "supported_io_types": { 00:12:00.551 "read": true, 00:12:00.551 "write": true, 00:12:00.551 "unmap": true, 00:12:00.551 "flush": true, 00:12:00.551 "reset": true, 00:12:00.551 "nvme_admin": false, 00:12:00.551 "nvme_io": false, 00:12:00.551 "nvme_io_md": false, 00:12:00.551 "write_zeroes": true, 00:12:00.551 "zcopy": true, 00:12:00.551 "get_zone_info": false, 00:12:00.551 "zone_management": false, 00:12:00.551 "zone_append": false, 
00:12:00.551 "compare": false, 00:12:00.551 "compare_and_write": false, 00:12:00.551 "abort": true, 00:12:00.551 "seek_hole": false, 00:12:00.551 "seek_data": false, 00:12:00.551 "copy": true, 00:12:00.551 "nvme_iov_md": false 00:12:00.551 }, 00:12:00.551 "memory_domains": [ 00:12:00.551 { 00:12:00.551 "dma_device_id": "system", 00:12:00.551 "dma_device_type": 1 00:12:00.551 }, 00:12:00.551 { 00:12:00.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.551 "dma_device_type": 2 00:12:00.551 } 00:12:00.551 ], 00:12:00.551 "driver_specific": {} 00:12:00.551 } 00:12:00.551 ] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 BaseBdev3 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 [ 00:12:00.551 { 00:12:00.551 "name": "BaseBdev3", 00:12:00.551 "aliases": [ 00:12:00.551 "a557f120-7355-47ba-bb5b-74644822219f" 00:12:00.551 ], 00:12:00.551 "product_name": "Malloc disk", 00:12:00.551 "block_size": 512, 00:12:00.551 "num_blocks": 65536, 00:12:00.551 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:00.551 "assigned_rate_limits": { 00:12:00.551 "rw_ios_per_sec": 0, 00:12:00.551 "rw_mbytes_per_sec": 0, 00:12:00.551 "r_mbytes_per_sec": 0, 00:12:00.551 "w_mbytes_per_sec": 0 00:12:00.551 }, 00:12:00.551 "claimed": false, 00:12:00.551 "zoned": false, 00:12:00.551 "supported_io_types": { 00:12:00.551 "read": true, 00:12:00.551 "write": true, 00:12:00.551 "unmap": true, 00:12:00.551 "flush": true, 00:12:00.551 "reset": true, 00:12:00.551 "nvme_admin": false, 00:12:00.551 "nvme_io": false, 00:12:00.551 "nvme_io_md": false, 00:12:00.551 "write_zeroes": true, 00:12:00.551 "zcopy": true, 00:12:00.551 "get_zone_info": false, 00:12:00.551 "zone_management": false, 00:12:00.551 "zone_append": false, 
00:12:00.551 "compare": false, 00:12:00.551 "compare_and_write": false, 00:12:00.551 "abort": true, 00:12:00.551 "seek_hole": false, 00:12:00.551 "seek_data": false, 00:12:00.551 "copy": true, 00:12:00.551 "nvme_iov_md": false 00:12:00.551 }, 00:12:00.551 "memory_domains": [ 00:12:00.551 { 00:12:00.551 "dma_device_id": "system", 00:12:00.551 "dma_device_type": 1 00:12:00.551 }, 00:12:00.551 { 00:12:00.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.551 "dma_device_type": 2 00:12:00.551 } 00:12:00.551 ], 00:12:00.551 "driver_specific": {} 00:12:00.551 } 00:12:00.551 ] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 BaseBdev4 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 [ 00:12:00.551 { 00:12:00.551 "name": "BaseBdev4", 00:12:00.551 "aliases": [ 00:12:00.551 "43689d04-2f1d-463b-b417-ec4833bf4d0e" 00:12:00.551 ], 00:12:00.551 "product_name": "Malloc disk", 00:12:00.551 "block_size": 512, 00:12:00.551 "num_blocks": 65536, 00:12:00.551 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:00.551 "assigned_rate_limits": { 00:12:00.551 "rw_ios_per_sec": 0, 00:12:00.551 "rw_mbytes_per_sec": 0, 00:12:00.551 "r_mbytes_per_sec": 0, 00:12:00.551 "w_mbytes_per_sec": 0 00:12:00.551 }, 00:12:00.551 "claimed": false, 00:12:00.551 "zoned": false, 00:12:00.551 "supported_io_types": { 00:12:00.551 "read": true, 00:12:00.551 "write": true, 00:12:00.551 "unmap": true, 00:12:00.551 "flush": true, 00:12:00.551 "reset": true, 00:12:00.551 "nvme_admin": false, 00:12:00.551 "nvme_io": false, 00:12:00.551 "nvme_io_md": false, 00:12:00.551 "write_zeroes": true, 00:12:00.551 "zcopy": true, 00:12:00.551 "get_zone_info": false, 00:12:00.551 "zone_management": false, 00:12:00.551 "zone_append": false, 
00:12:00.551 "compare": false, 00:12:00.551 "compare_and_write": false, 00:12:00.551 "abort": true, 00:12:00.551 "seek_hole": false, 00:12:00.551 "seek_data": false, 00:12:00.551 "copy": true, 00:12:00.551 "nvme_iov_md": false 00:12:00.551 }, 00:12:00.551 "memory_domains": [ 00:12:00.551 { 00:12:00.551 "dma_device_id": "system", 00:12:00.551 "dma_device_type": 1 00:12:00.551 }, 00:12:00.551 { 00:12:00.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.551 "dma_device_type": 2 00:12:00.551 } 00:12:00.551 ], 00:12:00.551 "driver_specific": {} 00:12:00.551 } 00:12:00.551 ] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.551 [2024-11-26 18:58:51.894248] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.551 [2024-11-26 18:58:51.894309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.551 [2024-11-26 18:58:51.894348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.551 [2024-11-26 18:58:51.896927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.551 [2024-11-26 18:58:51.897019] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.551 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.810 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.810 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:00.810 "name": "Existed_Raid", 00:12:00.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.810 "strip_size_kb": 0, 00:12:00.810 "state": "configuring", 00:12:00.810 "raid_level": "raid1", 00:12:00.810 "superblock": false, 00:12:00.810 "num_base_bdevs": 4, 00:12:00.810 "num_base_bdevs_discovered": 3, 00:12:00.810 "num_base_bdevs_operational": 4, 00:12:00.810 "base_bdevs_list": [ 00:12:00.810 { 00:12:00.810 "name": "BaseBdev1", 00:12:00.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.810 "is_configured": false, 00:12:00.810 "data_offset": 0, 00:12:00.810 "data_size": 0 00:12:00.810 }, 00:12:00.810 { 00:12:00.810 "name": "BaseBdev2", 00:12:00.810 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:00.810 "is_configured": true, 00:12:00.810 "data_offset": 0, 00:12:00.810 "data_size": 65536 00:12:00.810 }, 00:12:00.810 { 00:12:00.810 "name": "BaseBdev3", 00:12:00.810 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:00.810 "is_configured": true, 00:12:00.810 "data_offset": 0, 00:12:00.810 "data_size": 65536 00:12:00.810 }, 00:12:00.810 { 00:12:00.810 "name": "BaseBdev4", 00:12:00.811 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:00.811 "is_configured": true, 00:12:00.811 "data_offset": 0, 00:12:00.811 "data_size": 65536 00:12:00.811 } 00:12:00.811 ] 00:12:00.811 }' 00:12:00.811 18:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.811 18:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.070 [2024-11-26 18:58:52.422381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
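The trace above repeatedly runs `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` to pull out one raid bdev's state. A minimal Python sketch of that selection, using an abridged record copied from the dump above (field values verbatim; this models the filter, it does not call SPDK):

```python
# Sketch of the test's jq filter:
#   jq -r '.[] | select(.name == "Existed_Raid")'
# applied to bdev_raid_get_bdevs output. Sample data abridged from the log.
raid_bdevs = [
    {
        "name": "Existed_Raid",
        "state": "configuring",
        "raid_level": "raid1",
        "num_base_bdevs": 4,
        "base_bdevs_list": [
            {"name": "BaseBdev1", "is_configured": False},
            {"name": "BaseBdev2", "is_configured": True},
            {"name": "BaseBdev3", "is_configured": True},
            {"name": "BaseBdev4", "is_configured": True},
        ],
    }
]

def select_raid(bdevs, name):
    """Return the first raid bdev whose "name" matches, like jq's select()."""
    return next(b for b in bdevs if b["name"] == name)

info = select_raid(raid_bdevs, "Existed_Raid")
# num_base_bdevs_discovered is the count of configured base-bdev slots.
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
print(info["state"], discovered)  # configuring 3
```

This matches the dump above: with BaseBdev1 not yet created, the array sits in `configuring` state with 3 of 4 base bdevs discovered.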
00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.070 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.329 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.329 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.329 "name": "Existed_Raid", 00:12:01.329 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.329 "strip_size_kb": 0, 00:12:01.329 "state": "configuring", 00:12:01.329 "raid_level": "raid1", 00:12:01.329 "superblock": false, 00:12:01.329 "num_base_bdevs": 4, 00:12:01.329 "num_base_bdevs_discovered": 2, 00:12:01.329 "num_base_bdevs_operational": 4, 00:12:01.329 "base_bdevs_list": [ 00:12:01.329 { 00:12:01.329 "name": "BaseBdev1", 00:12:01.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.329 "is_configured": false, 00:12:01.329 "data_offset": 0, 00:12:01.329 "data_size": 0 00:12:01.329 }, 00:12:01.329 { 00:12:01.329 "name": null, 00:12:01.329 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:01.329 "is_configured": false, 00:12:01.329 "data_offset": 0, 00:12:01.329 "data_size": 65536 00:12:01.329 }, 00:12:01.329 { 00:12:01.329 "name": "BaseBdev3", 00:12:01.329 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:01.329 "is_configured": true, 00:12:01.329 "data_offset": 0, 00:12:01.329 "data_size": 65536 00:12:01.329 }, 00:12:01.329 { 00:12:01.329 "name": "BaseBdev4", 00:12:01.329 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:01.329 "is_configured": true, 00:12:01.329 "data_offset": 0, 00:12:01.329 "data_size": 65536 00:12:01.329 } 00:12:01.329 ] 00:12:01.329 }' 00:12:01.329 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.329 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.897 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.897 18:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.897 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.897 18:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.897 18:58:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.897 [2024-11-26 18:58:53.057748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.897 BaseBdev1 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.897 [ 00:12:01.897 { 00:12:01.897 "name": "BaseBdev1", 00:12:01.897 "aliases": [ 00:12:01.897 "2587d48f-6546-4aeb-929e-8c8ec0018af3" 00:12:01.897 ], 00:12:01.897 "product_name": "Malloc disk", 00:12:01.897 "block_size": 512, 00:12:01.897 "num_blocks": 65536, 00:12:01.897 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:01.897 "assigned_rate_limits": { 00:12:01.897 "rw_ios_per_sec": 0, 00:12:01.897 "rw_mbytes_per_sec": 0, 00:12:01.897 "r_mbytes_per_sec": 0, 00:12:01.897 "w_mbytes_per_sec": 0 00:12:01.897 }, 00:12:01.897 "claimed": true, 00:12:01.897 "claim_type": "exclusive_write", 00:12:01.897 "zoned": false, 00:12:01.897 "supported_io_types": { 00:12:01.897 "read": true, 00:12:01.897 "write": true, 00:12:01.897 "unmap": true, 00:12:01.897 "flush": true, 00:12:01.897 "reset": true, 00:12:01.897 "nvme_admin": false, 00:12:01.897 "nvme_io": false, 00:12:01.897 "nvme_io_md": false, 00:12:01.897 "write_zeroes": true, 00:12:01.897 "zcopy": true, 00:12:01.897 "get_zone_info": false, 00:12:01.897 "zone_management": false, 00:12:01.897 "zone_append": false, 00:12:01.897 "compare": false, 00:12:01.897 "compare_and_write": false, 00:12:01.897 "abort": true, 00:12:01.897 "seek_hole": false, 00:12:01.897 "seek_data": false, 00:12:01.897 "copy": true, 00:12:01.897 "nvme_iov_md": false 00:12:01.897 }, 00:12:01.897 "memory_domains": [ 00:12:01.897 { 00:12:01.897 "dma_device_id": "system", 00:12:01.897 "dma_device_type": 1 00:12:01.897 }, 00:12:01.897 { 00:12:01.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.897 "dma_device_type": 2 00:12:01.897 } 00:12:01.897 ], 00:12:01.897 "driver_specific": {} 00:12:01.897 } 00:12:01.897 ] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
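The `waitforbdev` helper traced above sets `bdev_timeout=2000` and then polls `bdev_get_bdevs -b <name> -t 2000` until the bdev is visible. A hedged sketch of that poll loop; `get_bdev` here stands in for the RPC call and is an assumption for illustration, not an SPDK API:

```python
import time

def waitforbdev(get_bdev, name, timeout_ms=2000, poll_ms=100):
    """Poll get_bdev(name) until it returns a bdev or the timeout elapses,
    roughly what the shell helper waitforbdev does via bdev_get_bdevs -t."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_ms / 1000.0)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_ms} ms")

# Stubbed RPC for demonstration: the bdev shows up on the third poll.
calls = {"n": 0}
def fake_get_bdev(name):
    calls["n"] += 1
    return {"name": name} if calls["n"] >= 3 else None

print(waitforbdev(fake_get_bdev, "BaseBdev1")["name"])  # BaseBdev1
```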
00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.897 "name": "Existed_Raid", 00:12:01.897 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.897 "strip_size_kb": 0, 00:12:01.897 "state": "configuring", 00:12:01.897 "raid_level": "raid1", 00:12:01.897 "superblock": false, 00:12:01.897 "num_base_bdevs": 4, 00:12:01.897 "num_base_bdevs_discovered": 3, 00:12:01.897 "num_base_bdevs_operational": 4, 00:12:01.897 "base_bdevs_list": [ 00:12:01.897 { 00:12:01.897 "name": "BaseBdev1", 00:12:01.897 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:01.897 "is_configured": true, 00:12:01.897 "data_offset": 0, 00:12:01.897 "data_size": 65536 00:12:01.897 }, 00:12:01.897 { 00:12:01.897 "name": null, 00:12:01.897 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:01.897 "is_configured": false, 00:12:01.897 "data_offset": 0, 00:12:01.897 "data_size": 65536 00:12:01.897 }, 00:12:01.897 { 00:12:01.897 "name": "BaseBdev3", 00:12:01.897 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:01.897 "is_configured": true, 00:12:01.897 "data_offset": 0, 00:12:01.897 "data_size": 65536 00:12:01.897 }, 00:12:01.897 { 00:12:01.897 "name": "BaseBdev4", 00:12:01.897 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:01.897 "is_configured": true, 00:12:01.897 "data_offset": 0, 00:12:01.897 "data_size": 65536 00:12:01.897 } 00:12:01.897 ] 00:12:01.897 }' 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.897 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.466 [2024-11-26 18:58:53.706097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
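The dumps before and after `bdev_raid_remove_base_bdev` show the state transition this test verifies: the removed slot keeps its uuid and position in `base_bdevs_list`, but its `name` becomes null and `is_configured` flips to false, so `num_base_bdevs_discovered` drops while `num_base_bdevs_operational` stays 4. A small sketch that models (not calls) that bookkeeping, with slot contents mirroring the log:

```python
def remove_base_bdev(base_bdevs_list, name):
    """Model the slot update seen in the log after bdev_raid_remove_base_bdev:
    the slot is retained but unnamed and no longer configured."""
    for slot in base_bdevs_list:
        if slot["name"] == name:
            slot["name"] = None
            slot["is_configured"] = False
            return
    raise KeyError(name)

# State just before removing BaseBdev3 (BaseBdev2's slot already nulled out).
slots = [
    {"name": "BaseBdev1", "is_configured": True},
    {"name": None, "is_configured": False},
    {"name": "BaseBdev3", "is_configured": True},
    {"name": "BaseBdev4", "is_configured": True},
]
remove_base_bdev(slots, "BaseBdev3")
discovered = sum(1 for s in slots if s["is_configured"])
print(discovered)  # 2
```

This reproduces the `num_base_bdevs_discovered` 3 → 2 drop visible in the next `bdev_raid_get_bdevs` dump, while the array remains `configuring` because all 4 slots are still operational.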
00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.466 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.466 "name": "Existed_Raid", 00:12:02.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.466 "strip_size_kb": 0, 00:12:02.466 "state": "configuring", 00:12:02.466 "raid_level": "raid1", 00:12:02.466 "superblock": false, 00:12:02.466 "num_base_bdevs": 4, 00:12:02.466 "num_base_bdevs_discovered": 2, 00:12:02.466 "num_base_bdevs_operational": 4, 00:12:02.466 "base_bdevs_list": [ 00:12:02.466 { 00:12:02.466 "name": "BaseBdev1", 00:12:02.466 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:02.466 "is_configured": true, 00:12:02.466 "data_offset": 0, 00:12:02.467 "data_size": 65536 00:12:02.467 }, 00:12:02.467 { 00:12:02.467 "name": null, 00:12:02.467 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:02.467 "is_configured": false, 00:12:02.467 "data_offset": 0, 00:12:02.467 "data_size": 65536 00:12:02.467 }, 00:12:02.467 { 00:12:02.467 "name": null, 00:12:02.467 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:02.467 "is_configured": false, 00:12:02.467 "data_offset": 0, 00:12:02.467 "data_size": 65536 00:12:02.467 }, 00:12:02.467 { 00:12:02.467 "name": "BaseBdev4", 00:12:02.467 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:02.467 "is_configured": true, 00:12:02.467 "data_offset": 0, 00:12:02.467 "data_size": 65536 00:12:02.467 } 00:12:02.467 ] 00:12:02.467 }' 00:12:02.467 18:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.467 18:58:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.035 [2024-11-26 18:58:54.310279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.035 18:58:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.035 "name": "Existed_Raid", 00:12:03.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.035 "strip_size_kb": 0, 00:12:03.035 "state": "configuring", 00:12:03.035 "raid_level": "raid1", 00:12:03.035 "superblock": false, 00:12:03.035 "num_base_bdevs": 4, 00:12:03.035 "num_base_bdevs_discovered": 3, 00:12:03.035 "num_base_bdevs_operational": 4, 00:12:03.035 "base_bdevs_list": [ 00:12:03.035 { 00:12:03.035 "name": "BaseBdev1", 00:12:03.035 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:03.035 "is_configured": true, 00:12:03.035 "data_offset": 0, 00:12:03.035 "data_size": 65536 00:12:03.035 }, 00:12:03.035 { 00:12:03.035 "name": null, 00:12:03.035 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:03.035 "is_configured": false, 00:12:03.035 "data_offset": 
0, 00:12:03.035 "data_size": 65536 00:12:03.035 }, 00:12:03.035 { 00:12:03.035 "name": "BaseBdev3", 00:12:03.035 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:03.035 "is_configured": true, 00:12:03.035 "data_offset": 0, 00:12:03.035 "data_size": 65536 00:12:03.035 }, 00:12:03.035 { 00:12:03.035 "name": "BaseBdev4", 00:12:03.035 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:03.035 "is_configured": true, 00:12:03.035 "data_offset": 0, 00:12:03.035 "data_size": 65536 00:12:03.035 } 00:12:03.035 ] 00:12:03.035 }' 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.035 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 18:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 [2024-11-26 18:58:54.922523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.881 18:58:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.881 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.881 "name": "Existed_Raid", 00:12:03.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.881 "strip_size_kb": 0, 00:12:03.881 "state": "configuring", 00:12:03.881 
"raid_level": "raid1", 00:12:03.881 "superblock": false, 00:12:03.881 "num_base_bdevs": 4, 00:12:03.881 "num_base_bdevs_discovered": 2, 00:12:03.881 "num_base_bdevs_operational": 4, 00:12:03.881 "base_bdevs_list": [ 00:12:03.881 { 00:12:03.881 "name": null, 00:12:03.881 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:03.881 "is_configured": false, 00:12:03.881 "data_offset": 0, 00:12:03.881 "data_size": 65536 00:12:03.881 }, 00:12:03.881 { 00:12:03.881 "name": null, 00:12:03.881 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:03.882 "is_configured": false, 00:12:03.882 "data_offset": 0, 00:12:03.882 "data_size": 65536 00:12:03.882 }, 00:12:03.882 { 00:12:03.882 "name": "BaseBdev3", 00:12:03.882 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:03.882 "is_configured": true, 00:12:03.882 "data_offset": 0, 00:12:03.882 "data_size": 65536 00:12:03.882 }, 00:12:03.882 { 00:12:03.882 "name": "BaseBdev4", 00:12:03.882 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:03.882 "is_configured": true, 00:12:03.882 "data_offset": 0, 00:12:03.882 "data_size": 65536 00:12:03.882 } 00:12:03.882 ] 00:12:03.882 }' 00:12:03.882 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.882 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.447 [2024-11-26 18:58:55.595800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.447 "name": "Existed_Raid", 00:12:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.447 "strip_size_kb": 0, 00:12:04.447 "state": "configuring", 00:12:04.447 "raid_level": "raid1", 00:12:04.447 "superblock": false, 00:12:04.447 "num_base_bdevs": 4, 00:12:04.447 "num_base_bdevs_discovered": 3, 00:12:04.447 "num_base_bdevs_operational": 4, 00:12:04.447 "base_bdevs_list": [ 00:12:04.447 { 00:12:04.447 "name": null, 00:12:04.447 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:04.447 "is_configured": false, 00:12:04.447 "data_offset": 0, 00:12:04.447 "data_size": 65536 00:12:04.447 }, 00:12:04.447 { 00:12:04.447 "name": "BaseBdev2", 00:12:04.447 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:04.447 "is_configured": true, 00:12:04.447 "data_offset": 0, 00:12:04.447 "data_size": 65536 00:12:04.447 }, 00:12:04.447 { 00:12:04.447 "name": "BaseBdev3", 00:12:04.447 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:04.447 "is_configured": true, 00:12:04.447 "data_offset": 0, 00:12:04.447 "data_size": 65536 00:12:04.447 }, 00:12:04.447 { 00:12:04.447 "name": "BaseBdev4", 00:12:04.447 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:04.447 "is_configured": true, 00:12:04.447 "data_offset": 0, 00:12:04.447 "data_size": 65536 00:12:04.447 } 00:12:04.447 ] 00:12:04.447 }' 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.447 18:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.010 18:58:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2587d48f-6546-4aeb-929e-8c8ec0018af3 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.010 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.010 [2024-11-26 18:58:56.299669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:05.010 [2024-11-26 18:58:56.299743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:05.010 [2024-11-26 18:58:56.299760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:05.010 
[2024-11-26 18:58:56.300136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:05.010 [2024-11-26 18:58:56.300363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:05.010 [2024-11-26 18:58:56.300389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:05.010 [2024-11-26 18:58:56.300704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.010 NewBaseBdev 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.011 [ 00:12:05.011 { 00:12:05.011 "name": "NewBaseBdev", 00:12:05.011 "aliases": [ 00:12:05.011 "2587d48f-6546-4aeb-929e-8c8ec0018af3" 00:12:05.011 ], 00:12:05.011 "product_name": "Malloc disk", 00:12:05.011 "block_size": 512, 00:12:05.011 "num_blocks": 65536, 00:12:05.011 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:05.011 "assigned_rate_limits": { 00:12:05.011 "rw_ios_per_sec": 0, 00:12:05.011 "rw_mbytes_per_sec": 0, 00:12:05.011 "r_mbytes_per_sec": 0, 00:12:05.011 "w_mbytes_per_sec": 0 00:12:05.011 }, 00:12:05.011 "claimed": true, 00:12:05.011 "claim_type": "exclusive_write", 00:12:05.011 "zoned": false, 00:12:05.011 "supported_io_types": { 00:12:05.011 "read": true, 00:12:05.011 "write": true, 00:12:05.011 "unmap": true, 00:12:05.011 "flush": true, 00:12:05.011 "reset": true, 00:12:05.011 "nvme_admin": false, 00:12:05.011 "nvme_io": false, 00:12:05.011 "nvme_io_md": false, 00:12:05.011 "write_zeroes": true, 00:12:05.011 "zcopy": true, 00:12:05.011 "get_zone_info": false, 00:12:05.011 "zone_management": false, 00:12:05.011 "zone_append": false, 00:12:05.011 "compare": false, 00:12:05.011 "compare_and_write": false, 00:12:05.011 "abort": true, 00:12:05.011 "seek_hole": false, 00:12:05.011 "seek_data": false, 00:12:05.011 "copy": true, 00:12:05.011 "nvme_iov_md": false 00:12:05.011 }, 00:12:05.011 "memory_domains": [ 00:12:05.011 { 00:12:05.011 "dma_device_id": "system", 00:12:05.011 "dma_device_type": 1 00:12:05.011 }, 00:12:05.011 { 00:12:05.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.011 "dma_device_type": 2 00:12:05.011 } 00:12:05.011 ], 00:12:05.011 "driver_specific": {} 00:12:05.011 } 00:12:05.011 ] 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.011 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.268 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.268 "name": "Existed_Raid", 00:12:05.268 "uuid": "49a41eed-b984-424f-8662-c51ac224983a", 00:12:05.268 "strip_size_kb": 0, 00:12:05.268 "state": "online", 00:12:05.268 
"raid_level": "raid1", 00:12:05.268 "superblock": false, 00:12:05.268 "num_base_bdevs": 4, 00:12:05.268 "num_base_bdevs_discovered": 4, 00:12:05.268 "num_base_bdevs_operational": 4, 00:12:05.268 "base_bdevs_list": [ 00:12:05.268 { 00:12:05.268 "name": "NewBaseBdev", 00:12:05.268 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:05.268 "is_configured": true, 00:12:05.268 "data_offset": 0, 00:12:05.268 "data_size": 65536 00:12:05.268 }, 00:12:05.268 { 00:12:05.268 "name": "BaseBdev2", 00:12:05.268 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:05.268 "is_configured": true, 00:12:05.268 "data_offset": 0, 00:12:05.268 "data_size": 65536 00:12:05.268 }, 00:12:05.268 { 00:12:05.268 "name": "BaseBdev3", 00:12:05.268 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:05.268 "is_configured": true, 00:12:05.268 "data_offset": 0, 00:12:05.268 "data_size": 65536 00:12:05.268 }, 00:12:05.268 { 00:12:05.268 "name": "BaseBdev4", 00:12:05.268 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:05.268 "is_configured": true, 00:12:05.268 "data_offset": 0, 00:12:05.268 "data_size": 65536 00:12:05.268 } 00:12:05.268 ] 00:12:05.268 }' 00:12:05.268 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.268 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.525 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.525 [2024-11-26 18:58:56.888360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.782 18:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.782 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.782 "name": "Existed_Raid", 00:12:05.782 "aliases": [ 00:12:05.782 "49a41eed-b984-424f-8662-c51ac224983a" 00:12:05.782 ], 00:12:05.782 "product_name": "Raid Volume", 00:12:05.782 "block_size": 512, 00:12:05.782 "num_blocks": 65536, 00:12:05.782 "uuid": "49a41eed-b984-424f-8662-c51ac224983a", 00:12:05.782 "assigned_rate_limits": { 00:12:05.782 "rw_ios_per_sec": 0, 00:12:05.782 "rw_mbytes_per_sec": 0, 00:12:05.782 "r_mbytes_per_sec": 0, 00:12:05.782 "w_mbytes_per_sec": 0 00:12:05.782 }, 00:12:05.782 "claimed": false, 00:12:05.782 "zoned": false, 00:12:05.782 "supported_io_types": { 00:12:05.782 "read": true, 00:12:05.782 "write": true, 00:12:05.782 "unmap": false, 00:12:05.782 "flush": false, 00:12:05.782 "reset": true, 00:12:05.782 "nvme_admin": false, 00:12:05.782 "nvme_io": false, 00:12:05.782 "nvme_io_md": false, 00:12:05.782 "write_zeroes": true, 00:12:05.782 "zcopy": false, 00:12:05.782 "get_zone_info": false, 00:12:05.782 "zone_management": false, 00:12:05.782 "zone_append": false, 00:12:05.782 "compare": false, 00:12:05.782 "compare_and_write": false, 00:12:05.782 "abort": false, 00:12:05.782 "seek_hole": false, 00:12:05.782 "seek_data": false, 00:12:05.782 
"copy": false, 00:12:05.782 "nvme_iov_md": false 00:12:05.782 }, 00:12:05.782 "memory_domains": [ 00:12:05.782 { 00:12:05.782 "dma_device_id": "system", 00:12:05.782 "dma_device_type": 1 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.782 "dma_device_type": 2 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "system", 00:12:05.782 "dma_device_type": 1 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.782 "dma_device_type": 2 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "system", 00:12:05.782 "dma_device_type": 1 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.782 "dma_device_type": 2 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "system", 00:12:05.782 "dma_device_type": 1 00:12:05.782 }, 00:12:05.782 { 00:12:05.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.782 "dma_device_type": 2 00:12:05.782 } 00:12:05.782 ], 00:12:05.782 "driver_specific": { 00:12:05.782 "raid": { 00:12:05.782 "uuid": "49a41eed-b984-424f-8662-c51ac224983a", 00:12:05.783 "strip_size_kb": 0, 00:12:05.783 "state": "online", 00:12:05.783 "raid_level": "raid1", 00:12:05.783 "superblock": false, 00:12:05.783 "num_base_bdevs": 4, 00:12:05.783 "num_base_bdevs_discovered": 4, 00:12:05.783 "num_base_bdevs_operational": 4, 00:12:05.783 "base_bdevs_list": [ 00:12:05.783 { 00:12:05.783 "name": "NewBaseBdev", 00:12:05.783 "uuid": "2587d48f-6546-4aeb-929e-8c8ec0018af3", 00:12:05.783 "is_configured": true, 00:12:05.783 "data_offset": 0, 00:12:05.783 "data_size": 65536 00:12:05.783 }, 00:12:05.783 { 00:12:05.783 "name": "BaseBdev2", 00:12:05.783 "uuid": "be48575c-4fe3-40f7-a716-4570d9149101", 00:12:05.783 "is_configured": true, 00:12:05.783 "data_offset": 0, 00:12:05.783 "data_size": 65536 00:12:05.783 }, 00:12:05.783 { 00:12:05.783 "name": "BaseBdev3", 00:12:05.783 "uuid": "a557f120-7355-47ba-bb5b-74644822219f", 00:12:05.783 
"is_configured": true, 00:12:05.783 "data_offset": 0, 00:12:05.783 "data_size": 65536 00:12:05.783 }, 00:12:05.783 { 00:12:05.783 "name": "BaseBdev4", 00:12:05.783 "uuid": "43689d04-2f1d-463b-b417-ec4833bf4d0e", 00:12:05.783 "is_configured": true, 00:12:05.783 "data_offset": 0, 00:12:05.783 "data_size": 65536 00:12:05.783 } 00:12:05.783 ] 00:12:05.783 } 00:12:05.783 } 00:12:05.783 }' 00:12:05.783 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.783 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.783 BaseBdev2 00:12:05.783 BaseBdev3 00:12:05.783 BaseBdev4' 00:12:05.783 18:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.783 18:58:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.783 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.066 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.067 18:58:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.067 [2024-11-26 18:58:57.268075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.067 [2024-11-26 18:58:57.268114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.067 [2024-11-26 18:58:57.268234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.067 [2024-11-26 18:58:57.268643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.067 [2024-11-26 18:58:57.268679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73365 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73365 ']' 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73365 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73365 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.067 killing process with pid 73365 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73365' 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73365 00:12:06.067 [2024-11-26 18:58:57.307111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.067 18:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73365 00:12:06.630 [2024-11-26 18:58:57.689778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.561 ************************************ 00:12:07.561 END TEST raid_state_function_test 00:12:07.561 ************************************ 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:07.561 00:12:07.561 real 0m13.234s 00:12:07.561 user 0m21.887s 00:12:07.561 sys 0m1.841s 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:07.561 18:58:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:07.561 18:58:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:07.561 18:58:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.561 18:58:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.561 ************************************ 00:12:07.561 START TEST raid_state_function_test_sb 00:12:07.561 ************************************ 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.561 
18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.561 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74052 00:12:07.562 Process raid pid: 74052 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74052' 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74052 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74052 ']' 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.562 18:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.820 [2024-11-26 18:58:58.993149] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:12:07.820 [2024-11-26 18:58:58.993355] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.820 [2024-11-26 18:58:59.184732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.077 [2024-11-26 18:58:59.349381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.335 [2024-11-26 18:58:59.577005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.335 [2024-11-26 18:58:59.577078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.903 [2024-11-26 18:59:00.019188] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.903 [2024-11-26 18:59:00.019258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.903 [2024-11-26 18:59:00.019277] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.903 [2024-11-26 18:59:00.019294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.903 [2024-11-26 18:59:00.019304] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:08.903 [2024-11-26 18:59:00.019320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.903 [2024-11-26 18:59:00.019330] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.903 [2024-11-26 18:59:00.019345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.903 18:59:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.903 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.903 "name": "Existed_Raid", 00:12:08.904 "uuid": "d08431ed-d254-470c-a614-dba2710388b0", 00:12:08.904 "strip_size_kb": 0, 00:12:08.904 "state": "configuring", 00:12:08.904 "raid_level": "raid1", 00:12:08.904 "superblock": true, 00:12:08.904 "num_base_bdevs": 4, 00:12:08.904 "num_base_bdevs_discovered": 0, 00:12:08.904 "num_base_bdevs_operational": 4, 00:12:08.904 "base_bdevs_list": [ 00:12:08.904 { 00:12:08.904 "name": "BaseBdev1", 00:12:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.904 "is_configured": false, 00:12:08.904 "data_offset": 0, 00:12:08.904 "data_size": 0 00:12:08.904 }, 00:12:08.904 { 00:12:08.904 "name": "BaseBdev2", 00:12:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.904 "is_configured": false, 00:12:08.904 "data_offset": 0, 00:12:08.904 "data_size": 0 00:12:08.904 }, 00:12:08.904 { 00:12:08.904 "name": "BaseBdev3", 00:12:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.904 "is_configured": false, 00:12:08.904 "data_offset": 0, 00:12:08.904 "data_size": 0 00:12:08.904 }, 00:12:08.904 { 00:12:08.904 "name": "BaseBdev4", 00:12:08.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.904 "is_configured": false, 00:12:08.904 "data_offset": 0, 00:12:08.904 "data_size": 0 00:12:08.904 } 00:12:08.904 ] 00:12:08.904 }' 00:12:08.904 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.904 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 [2024-11-26 18:59:00.571303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.471 [2024-11-26 18:59:00.571358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 [2024-11-26 18:59:00.579291] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.471 [2024-11-26 18:59:00.579345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.471 [2024-11-26 18:59:00.579361] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.471 [2024-11-26 18:59:00.579377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.471 [2024-11-26 18:59:00.579397] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.471 [2024-11-26 18:59:00.579412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.471 [2024-11-26 18:59:00.579422] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:09.471 [2024-11-26 18:59:00.579436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 [2024-11-26 18:59:00.625268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.471 BaseBdev1 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.471 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 [ 00:12:09.471 { 00:12:09.471 "name": "BaseBdev1", 00:12:09.471 "aliases": [ 00:12:09.471 "23009ecc-5f55-4477-ac1a-960263547f98" 00:12:09.472 ], 00:12:09.472 "product_name": "Malloc disk", 00:12:09.472 "block_size": 512, 00:12:09.472 "num_blocks": 65536, 00:12:09.472 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:09.472 "assigned_rate_limits": { 00:12:09.472 "rw_ios_per_sec": 0, 00:12:09.472 "rw_mbytes_per_sec": 0, 00:12:09.472 "r_mbytes_per_sec": 0, 00:12:09.472 "w_mbytes_per_sec": 0 00:12:09.472 }, 00:12:09.472 "claimed": true, 00:12:09.472 "claim_type": "exclusive_write", 00:12:09.472 "zoned": false, 00:12:09.472 "supported_io_types": { 00:12:09.472 "read": true, 00:12:09.472 "write": true, 00:12:09.472 "unmap": true, 00:12:09.472 "flush": true, 00:12:09.472 "reset": true, 00:12:09.472 "nvme_admin": false, 00:12:09.472 "nvme_io": false, 00:12:09.472 "nvme_io_md": false, 00:12:09.472 "write_zeroes": true, 00:12:09.472 "zcopy": true, 00:12:09.472 "get_zone_info": false, 00:12:09.472 "zone_management": false, 00:12:09.472 "zone_append": false, 00:12:09.472 "compare": false, 00:12:09.472 "compare_and_write": false, 00:12:09.472 "abort": true, 00:12:09.472 "seek_hole": false, 00:12:09.472 "seek_data": false, 00:12:09.472 "copy": true, 00:12:09.472 "nvme_iov_md": false 00:12:09.472 }, 00:12:09.472 "memory_domains": [ 00:12:09.472 { 00:12:09.472 "dma_device_id": "system", 00:12:09.472 "dma_device_type": 1 00:12:09.472 }, 00:12:09.472 { 00:12:09.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.472 "dma_device_type": 2 00:12:09.472 } 00:12:09.472 ], 00:12:09.472 "driver_specific": {} 
00:12:09.472 } 00:12:09.472 ] 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.472 "name": "Existed_Raid", 00:12:09.472 "uuid": "967075e9-99d7-4287-9a8b-7b576006d749", 00:12:09.472 "strip_size_kb": 0, 00:12:09.472 "state": "configuring", 00:12:09.472 "raid_level": "raid1", 00:12:09.472 "superblock": true, 00:12:09.472 "num_base_bdevs": 4, 00:12:09.472 "num_base_bdevs_discovered": 1, 00:12:09.472 "num_base_bdevs_operational": 4, 00:12:09.472 "base_bdevs_list": [ 00:12:09.472 { 00:12:09.472 "name": "BaseBdev1", 00:12:09.472 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:09.472 "is_configured": true, 00:12:09.472 "data_offset": 2048, 00:12:09.472 "data_size": 63488 00:12:09.472 }, 00:12:09.472 { 00:12:09.472 "name": "BaseBdev2", 00:12:09.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.472 "is_configured": false, 00:12:09.472 "data_offset": 0, 00:12:09.472 "data_size": 0 00:12:09.472 }, 00:12:09.472 { 00:12:09.472 "name": "BaseBdev3", 00:12:09.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.472 "is_configured": false, 00:12:09.472 "data_offset": 0, 00:12:09.472 "data_size": 0 00:12:09.472 }, 00:12:09.472 { 00:12:09.472 "name": "BaseBdev4", 00:12:09.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.472 "is_configured": false, 00:12:09.472 "data_offset": 0, 00:12:09.472 "data_size": 0 00:12:09.472 } 00:12:09.472 ] 00:12:09.472 }' 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.472 18:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.039 [2024-11-26 18:59:01.241495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.039 [2024-11-26 18:59:01.241568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.039 [2024-11-26 18:59:01.249556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.039 [2024-11-26 18:59:01.252162] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.039 [2024-11-26 18:59:01.252220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.039 [2024-11-26 18:59:01.252238] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:10.039 [2024-11-26 18:59:01.252257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:10.039 [2024-11-26 18:59:01.252268] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:10.039 [2024-11-26 18:59:01.252281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.039 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:10.039 18:59:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.040 "name": 
"Existed_Raid", 00:12:10.040 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:10.040 "strip_size_kb": 0, 00:12:10.040 "state": "configuring", 00:12:10.040 "raid_level": "raid1", 00:12:10.040 "superblock": true, 00:12:10.040 "num_base_bdevs": 4, 00:12:10.040 "num_base_bdevs_discovered": 1, 00:12:10.040 "num_base_bdevs_operational": 4, 00:12:10.040 "base_bdevs_list": [ 00:12:10.040 { 00:12:10.040 "name": "BaseBdev1", 00:12:10.040 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:10.040 "is_configured": true, 00:12:10.040 "data_offset": 2048, 00:12:10.040 "data_size": 63488 00:12:10.040 }, 00:12:10.040 { 00:12:10.040 "name": "BaseBdev2", 00:12:10.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.040 "is_configured": false, 00:12:10.040 "data_offset": 0, 00:12:10.040 "data_size": 0 00:12:10.040 }, 00:12:10.040 { 00:12:10.040 "name": "BaseBdev3", 00:12:10.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.040 "is_configured": false, 00:12:10.040 "data_offset": 0, 00:12:10.040 "data_size": 0 00:12:10.040 }, 00:12:10.040 { 00:12:10.040 "name": "BaseBdev4", 00:12:10.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.040 "is_configured": false, 00:12:10.040 "data_offset": 0, 00:12:10.040 "data_size": 0 00:12:10.040 } 00:12:10.040 ] 00:12:10.040 }' 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.040 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.606 [2024-11-26 18:59:01.785290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.606 
BaseBdev2 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.606 [ 00:12:10.606 { 00:12:10.606 "name": "BaseBdev2", 00:12:10.606 "aliases": [ 00:12:10.606 "75bf918b-e5c7-4ab6-8339-a2530c28ff6c" 00:12:10.606 ], 00:12:10.606 "product_name": "Malloc disk", 00:12:10.606 "block_size": 512, 00:12:10.606 "num_blocks": 65536, 00:12:10.606 "uuid": "75bf918b-e5c7-4ab6-8339-a2530c28ff6c", 00:12:10.606 "assigned_rate_limits": { 
00:12:10.606 "rw_ios_per_sec": 0, 00:12:10.606 "rw_mbytes_per_sec": 0, 00:12:10.606 "r_mbytes_per_sec": 0, 00:12:10.606 "w_mbytes_per_sec": 0 00:12:10.606 }, 00:12:10.606 "claimed": true, 00:12:10.606 "claim_type": "exclusive_write", 00:12:10.606 "zoned": false, 00:12:10.606 "supported_io_types": { 00:12:10.606 "read": true, 00:12:10.606 "write": true, 00:12:10.606 "unmap": true, 00:12:10.606 "flush": true, 00:12:10.606 "reset": true, 00:12:10.606 "nvme_admin": false, 00:12:10.606 "nvme_io": false, 00:12:10.606 "nvme_io_md": false, 00:12:10.606 "write_zeroes": true, 00:12:10.606 "zcopy": true, 00:12:10.606 "get_zone_info": false, 00:12:10.606 "zone_management": false, 00:12:10.606 "zone_append": false, 00:12:10.606 "compare": false, 00:12:10.606 "compare_and_write": false, 00:12:10.606 "abort": true, 00:12:10.606 "seek_hole": false, 00:12:10.606 "seek_data": false, 00:12:10.606 "copy": true, 00:12:10.606 "nvme_iov_md": false 00:12:10.606 }, 00:12:10.606 "memory_domains": [ 00:12:10.606 { 00:12:10.606 "dma_device_id": "system", 00:12:10.606 "dma_device_type": 1 00:12:10.606 }, 00:12:10.606 { 00:12:10.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.606 "dma_device_type": 2 00:12:10.606 } 00:12:10.606 ], 00:12:10.606 "driver_specific": {} 00:12:10.606 } 00:12:10.606 ] 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.606 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.607 "name": "Existed_Raid", 00:12:10.607 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:10.607 "strip_size_kb": 0, 00:12:10.607 "state": "configuring", 00:12:10.607 "raid_level": "raid1", 00:12:10.607 "superblock": true, 00:12:10.607 "num_base_bdevs": 4, 00:12:10.607 "num_base_bdevs_discovered": 2, 00:12:10.607 "num_base_bdevs_operational": 4, 00:12:10.607 
"base_bdevs_list": [ 00:12:10.607 { 00:12:10.607 "name": "BaseBdev1", 00:12:10.607 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:10.607 "is_configured": true, 00:12:10.607 "data_offset": 2048, 00:12:10.607 "data_size": 63488 00:12:10.607 }, 00:12:10.607 { 00:12:10.607 "name": "BaseBdev2", 00:12:10.607 "uuid": "75bf918b-e5c7-4ab6-8339-a2530c28ff6c", 00:12:10.607 "is_configured": true, 00:12:10.607 "data_offset": 2048, 00:12:10.607 "data_size": 63488 00:12:10.607 }, 00:12:10.607 { 00:12:10.607 "name": "BaseBdev3", 00:12:10.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.607 "is_configured": false, 00:12:10.607 "data_offset": 0, 00:12:10.607 "data_size": 0 00:12:10.607 }, 00:12:10.607 { 00:12:10.607 "name": "BaseBdev4", 00:12:10.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.607 "is_configured": false, 00:12:10.607 "data_offset": 0, 00:12:10.607 "data_size": 0 00:12:10.607 } 00:12:10.607 ] 00:12:10.607 }' 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.607 18:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.172 [2024-11-26 18:59:02.367010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.172 BaseBdev3 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.172 [ 00:12:11.172 { 00:12:11.172 "name": "BaseBdev3", 00:12:11.172 "aliases": [ 00:12:11.172 "1f9b3f0a-6c90-4aa7-a1e9-361916c8cf63" 00:12:11.172 ], 00:12:11.172 "product_name": "Malloc disk", 00:12:11.172 "block_size": 512, 00:12:11.172 "num_blocks": 65536, 00:12:11.172 "uuid": "1f9b3f0a-6c90-4aa7-a1e9-361916c8cf63", 00:12:11.172 "assigned_rate_limits": { 00:12:11.172 "rw_ios_per_sec": 0, 00:12:11.172 "rw_mbytes_per_sec": 0, 00:12:11.172 "r_mbytes_per_sec": 0, 00:12:11.172 "w_mbytes_per_sec": 0 00:12:11.172 }, 00:12:11.172 "claimed": true, 00:12:11.172 "claim_type": "exclusive_write", 00:12:11.172 "zoned": false, 00:12:11.172 "supported_io_types": { 00:12:11.172 "read": true, 00:12:11.172 
"write": true, 00:12:11.172 "unmap": true, 00:12:11.172 "flush": true, 00:12:11.172 "reset": true, 00:12:11.172 "nvme_admin": false, 00:12:11.172 "nvme_io": false, 00:12:11.172 "nvme_io_md": false, 00:12:11.172 "write_zeroes": true, 00:12:11.172 "zcopy": true, 00:12:11.172 "get_zone_info": false, 00:12:11.172 "zone_management": false, 00:12:11.172 "zone_append": false, 00:12:11.172 "compare": false, 00:12:11.172 "compare_and_write": false, 00:12:11.172 "abort": true, 00:12:11.172 "seek_hole": false, 00:12:11.172 "seek_data": false, 00:12:11.172 "copy": true, 00:12:11.172 "nvme_iov_md": false 00:12:11.172 }, 00:12:11.172 "memory_domains": [ 00:12:11.172 { 00:12:11.172 "dma_device_id": "system", 00:12:11.172 "dma_device_type": 1 00:12:11.172 }, 00:12:11.172 { 00:12:11.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.172 "dma_device_type": 2 00:12:11.172 } 00:12:11.172 ], 00:12:11.172 "driver_specific": {} 00:12:11.172 } 00:12:11.172 ] 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.172 "name": "Existed_Raid", 00:12:11.172 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:11.172 "strip_size_kb": 0, 00:12:11.172 "state": "configuring", 00:12:11.172 "raid_level": "raid1", 00:12:11.172 "superblock": true, 00:12:11.172 "num_base_bdevs": 4, 00:12:11.172 "num_base_bdevs_discovered": 3, 00:12:11.172 "num_base_bdevs_operational": 4, 00:12:11.172 "base_bdevs_list": [ 00:12:11.172 { 00:12:11.172 "name": "BaseBdev1", 00:12:11.172 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:11.172 "is_configured": true, 00:12:11.172 "data_offset": 2048, 00:12:11.172 "data_size": 63488 00:12:11.172 }, 00:12:11.172 { 00:12:11.172 "name": "BaseBdev2", 00:12:11.172 "uuid": 
"75bf918b-e5c7-4ab6-8339-a2530c28ff6c", 00:12:11.172 "is_configured": true, 00:12:11.172 "data_offset": 2048, 00:12:11.172 "data_size": 63488 00:12:11.172 }, 00:12:11.172 { 00:12:11.172 "name": "BaseBdev3", 00:12:11.172 "uuid": "1f9b3f0a-6c90-4aa7-a1e9-361916c8cf63", 00:12:11.172 "is_configured": true, 00:12:11.172 "data_offset": 2048, 00:12:11.172 "data_size": 63488 00:12:11.172 }, 00:12:11.172 { 00:12:11.172 "name": "BaseBdev4", 00:12:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.172 "is_configured": false, 00:12:11.172 "data_offset": 0, 00:12:11.172 "data_size": 0 00:12:11.172 } 00:12:11.172 ] 00:12:11.172 }' 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.172 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.738 [2024-11-26 18:59:02.902041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.738 [2024-11-26 18:59:02.902672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.738 [2024-11-26 18:59:02.902701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.738 [2024-11-26 18:59:02.903108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.738 BaseBdev4 00:12:11.738 [2024-11-26 18:59:02.903340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.738 [2024-11-26 18:59:02.903364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:11.738 [2024-11-26 18:59:02.903583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.738 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.738 [ 00:12:11.739 { 00:12:11.739 "name": "BaseBdev4", 00:12:11.739 "aliases": [ 00:12:11.739 "4e006d4f-e586-4d0e-a6f8-f2cc3684940c" 00:12:11.739 ], 00:12:11.739 "product_name": "Malloc disk", 00:12:11.739 "block_size": 512, 00:12:11.739 
"num_blocks": 65536, 00:12:11.739 "uuid": "4e006d4f-e586-4d0e-a6f8-f2cc3684940c", 00:12:11.739 "assigned_rate_limits": { 00:12:11.739 "rw_ios_per_sec": 0, 00:12:11.739 "rw_mbytes_per_sec": 0, 00:12:11.739 "r_mbytes_per_sec": 0, 00:12:11.739 "w_mbytes_per_sec": 0 00:12:11.739 }, 00:12:11.739 "claimed": true, 00:12:11.739 "claim_type": "exclusive_write", 00:12:11.739 "zoned": false, 00:12:11.739 "supported_io_types": { 00:12:11.739 "read": true, 00:12:11.739 "write": true, 00:12:11.739 "unmap": true, 00:12:11.739 "flush": true, 00:12:11.739 "reset": true, 00:12:11.739 "nvme_admin": false, 00:12:11.739 "nvme_io": false, 00:12:11.739 "nvme_io_md": false, 00:12:11.739 "write_zeroes": true, 00:12:11.739 "zcopy": true, 00:12:11.739 "get_zone_info": false, 00:12:11.739 "zone_management": false, 00:12:11.739 "zone_append": false, 00:12:11.739 "compare": false, 00:12:11.739 "compare_and_write": false, 00:12:11.739 "abort": true, 00:12:11.739 "seek_hole": false, 00:12:11.739 "seek_data": false, 00:12:11.739 "copy": true, 00:12:11.739 "nvme_iov_md": false 00:12:11.739 }, 00:12:11.739 "memory_domains": [ 00:12:11.739 { 00:12:11.739 "dma_device_id": "system", 00:12:11.739 "dma_device_type": 1 00:12:11.739 }, 00:12:11.739 { 00:12:11.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.739 "dma_device_type": 2 00:12:11.739 } 00:12:11.739 ], 00:12:11.739 "driver_specific": {} 00:12:11.739 } 00:12:11.739 ] 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.739 "name": "Existed_Raid", 00:12:11.739 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:11.739 "strip_size_kb": 0, 00:12:11.739 "state": "online", 00:12:11.739 "raid_level": "raid1", 00:12:11.739 "superblock": true, 00:12:11.739 "num_base_bdevs": 4, 
00:12:11.739 "num_base_bdevs_discovered": 4, 00:12:11.739 "num_base_bdevs_operational": 4, 00:12:11.739 "base_bdevs_list": [ 00:12:11.739 { 00:12:11.739 "name": "BaseBdev1", 00:12:11.739 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:11.739 "is_configured": true, 00:12:11.739 "data_offset": 2048, 00:12:11.739 "data_size": 63488 00:12:11.739 }, 00:12:11.739 { 00:12:11.739 "name": "BaseBdev2", 00:12:11.739 "uuid": "75bf918b-e5c7-4ab6-8339-a2530c28ff6c", 00:12:11.739 "is_configured": true, 00:12:11.739 "data_offset": 2048, 00:12:11.739 "data_size": 63488 00:12:11.739 }, 00:12:11.739 { 00:12:11.739 "name": "BaseBdev3", 00:12:11.739 "uuid": "1f9b3f0a-6c90-4aa7-a1e9-361916c8cf63", 00:12:11.739 "is_configured": true, 00:12:11.739 "data_offset": 2048, 00:12:11.739 "data_size": 63488 00:12:11.739 }, 00:12:11.739 { 00:12:11.739 "name": "BaseBdev4", 00:12:11.739 "uuid": "4e006d4f-e586-4d0e-a6f8-f2cc3684940c", 00:12:11.739 "is_configured": true, 00:12:11.739 "data_offset": 2048, 00:12:11.739 "data_size": 63488 00:12:11.739 } 00:12:11.739 ] 00:12:11.739 }' 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.739 18:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.307 
18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.307 [2024-11-26 18:59:03.454678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.307 "name": "Existed_Raid", 00:12:12.307 "aliases": [ 00:12:12.307 "3d68023d-6250-4ca8-8846-58c0070c58ef" 00:12:12.307 ], 00:12:12.307 "product_name": "Raid Volume", 00:12:12.307 "block_size": 512, 00:12:12.307 "num_blocks": 63488, 00:12:12.307 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:12.307 "assigned_rate_limits": { 00:12:12.307 "rw_ios_per_sec": 0, 00:12:12.307 "rw_mbytes_per_sec": 0, 00:12:12.307 "r_mbytes_per_sec": 0, 00:12:12.307 "w_mbytes_per_sec": 0 00:12:12.307 }, 00:12:12.307 "claimed": false, 00:12:12.307 "zoned": false, 00:12:12.307 "supported_io_types": { 00:12:12.307 "read": true, 00:12:12.307 "write": true, 00:12:12.307 "unmap": false, 00:12:12.307 "flush": false, 00:12:12.307 "reset": true, 00:12:12.307 "nvme_admin": false, 00:12:12.307 "nvme_io": false, 00:12:12.307 "nvme_io_md": false, 00:12:12.307 "write_zeroes": true, 00:12:12.307 "zcopy": false, 00:12:12.307 "get_zone_info": false, 00:12:12.307 "zone_management": false, 00:12:12.307 "zone_append": false, 00:12:12.307 "compare": false, 00:12:12.307 "compare_and_write": false, 00:12:12.307 "abort": false, 00:12:12.307 "seek_hole": false, 00:12:12.307 "seek_data": false, 00:12:12.307 "copy": false, 00:12:12.307 
"nvme_iov_md": false 00:12:12.307 }, 00:12:12.307 "memory_domains": [ 00:12:12.307 { 00:12:12.307 "dma_device_id": "system", 00:12:12.307 "dma_device_type": 1 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.307 "dma_device_type": 2 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "system", 00:12:12.307 "dma_device_type": 1 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.307 "dma_device_type": 2 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "system", 00:12:12.307 "dma_device_type": 1 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.307 "dma_device_type": 2 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "system", 00:12:12.307 "dma_device_type": 1 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.307 "dma_device_type": 2 00:12:12.307 } 00:12:12.307 ], 00:12:12.307 "driver_specific": { 00:12:12.307 "raid": { 00:12:12.307 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:12.307 "strip_size_kb": 0, 00:12:12.307 "state": "online", 00:12:12.307 "raid_level": "raid1", 00:12:12.307 "superblock": true, 00:12:12.307 "num_base_bdevs": 4, 00:12:12.307 "num_base_bdevs_discovered": 4, 00:12:12.307 "num_base_bdevs_operational": 4, 00:12:12.307 "base_bdevs_list": [ 00:12:12.307 { 00:12:12.307 "name": "BaseBdev1", 00:12:12.307 "uuid": "23009ecc-5f55-4477-ac1a-960263547f98", 00:12:12.307 "is_configured": true, 00:12:12.307 "data_offset": 2048, 00:12:12.307 "data_size": 63488 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "name": "BaseBdev2", 00:12:12.307 "uuid": "75bf918b-e5c7-4ab6-8339-a2530c28ff6c", 00:12:12.307 "is_configured": true, 00:12:12.307 "data_offset": 2048, 00:12:12.307 "data_size": 63488 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "name": "BaseBdev3", 00:12:12.307 "uuid": "1f9b3f0a-6c90-4aa7-a1e9-361916c8cf63", 00:12:12.307 "is_configured": true, 
00:12:12.307 "data_offset": 2048, 00:12:12.307 "data_size": 63488 00:12:12.307 }, 00:12:12.307 { 00:12:12.307 "name": "BaseBdev4", 00:12:12.307 "uuid": "4e006d4f-e586-4d0e-a6f8-f2cc3684940c", 00:12:12.307 "is_configured": true, 00:12:12.307 "data_offset": 2048, 00:12:12.307 "data_size": 63488 00:12:12.307 } 00:12:12.307 ] 00:12:12.307 } 00:12:12.307 } 00:12:12.307 }' 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.307 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:12.307 BaseBdev2 00:12:12.308 BaseBdev3 00:12:12.308 BaseBdev4' 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.308 18:59:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.308 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.566 [2024-11-26 18:59:03.834442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:12.566 18:59:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.566 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.824 "name": "Existed_Raid", 00:12:12.824 "uuid": "3d68023d-6250-4ca8-8846-58c0070c58ef", 00:12:12.824 "strip_size_kb": 0, 00:12:12.824 
"state": "online", 00:12:12.824 "raid_level": "raid1", 00:12:12.824 "superblock": true, 00:12:12.824 "num_base_bdevs": 4, 00:12:12.824 "num_base_bdevs_discovered": 3, 00:12:12.824 "num_base_bdevs_operational": 3, 00:12:12.824 "base_bdevs_list": [ 00:12:12.824 { 00:12:12.824 "name": null, 00:12:12.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.824 "is_configured": false, 00:12:12.824 "data_offset": 0, 00:12:12.824 "data_size": 63488 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "name": "BaseBdev2", 00:12:12.824 "uuid": "75bf918b-e5c7-4ab6-8339-a2530c28ff6c", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 2048, 00:12:12.824 "data_size": 63488 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "name": "BaseBdev3", 00:12:12.824 "uuid": "1f9b3f0a-6c90-4aa7-a1e9-361916c8cf63", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 2048, 00:12:12.824 "data_size": 63488 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "name": "BaseBdev4", 00:12:12.824 "uuid": "4e006d4f-e586-4d0e-a6f8-f2cc3684940c", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 2048, 00:12:12.824 "data_size": 63488 00:12:12.824 } 00:12:12.824 ] 00:12:12.824 }' 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.824 18:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.083 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:13.083 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.083 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.083 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.083 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.083 18:59:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 [2024-11-26 18:59:04.494443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.367 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 [2024-11-26 18:59:04.667154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 [2024-11-26 18:59:04.820627] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:13.640 [2024-11-26 18:59:04.820803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.640 [2024-11-26 18:59:04.913846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.640 [2024-11-26 18:59:04.913989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.640 [2024-11-26 18:59:04.914017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.640 18:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 BaseBdev2 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:13.900 [ 00:12:13.900 { 00:12:13.900 "name": "BaseBdev2", 00:12:13.900 "aliases": [ 00:12:13.900 "629defcb-3578-46dd-aad8-48c70908f679" 00:12:13.900 ], 00:12:13.900 "product_name": "Malloc disk", 00:12:13.900 "block_size": 512, 00:12:13.900 "num_blocks": 65536, 00:12:13.900 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:13.900 "assigned_rate_limits": { 00:12:13.900 "rw_ios_per_sec": 0, 00:12:13.900 "rw_mbytes_per_sec": 0, 00:12:13.900 "r_mbytes_per_sec": 0, 00:12:13.900 "w_mbytes_per_sec": 0 00:12:13.900 }, 00:12:13.900 "claimed": false, 00:12:13.900 "zoned": false, 00:12:13.900 "supported_io_types": { 00:12:13.900 "read": true, 00:12:13.900 "write": true, 00:12:13.900 "unmap": true, 00:12:13.900 "flush": true, 00:12:13.900 "reset": true, 00:12:13.900 "nvme_admin": false, 00:12:13.900 "nvme_io": false, 00:12:13.900 "nvme_io_md": false, 00:12:13.900 "write_zeroes": true, 00:12:13.900 "zcopy": true, 00:12:13.900 "get_zone_info": false, 00:12:13.900 "zone_management": false, 00:12:13.900 "zone_append": false, 00:12:13.900 "compare": false, 00:12:13.900 "compare_and_write": false, 00:12:13.900 "abort": true, 00:12:13.900 "seek_hole": false, 00:12:13.900 "seek_data": false, 00:12:13.900 "copy": true, 00:12:13.900 "nvme_iov_md": false 00:12:13.900 }, 00:12:13.900 "memory_domains": [ 00:12:13.900 { 00:12:13.900 "dma_device_id": "system", 00:12:13.900 "dma_device_type": 1 00:12:13.900 }, 00:12:13.900 { 00:12:13.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.900 "dma_device_type": 2 00:12:13.900 } 00:12:13.900 ], 00:12:13.900 "driver_specific": {} 00:12:13.900 } 00:12:13.900 ] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.900 18:59:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 BaseBdev3 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.900 18:59:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 [ 00:12:13.900 { 00:12:13.900 "name": "BaseBdev3", 00:12:13.900 "aliases": [ 00:12:13.900 "ee8a2390-77c9-46d3-83b9-18715783a15c" 00:12:13.900 ], 00:12:13.900 "product_name": "Malloc disk", 00:12:13.900 "block_size": 512, 00:12:13.900 "num_blocks": 65536, 00:12:13.900 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:13.900 "assigned_rate_limits": { 00:12:13.900 "rw_ios_per_sec": 0, 00:12:13.900 "rw_mbytes_per_sec": 0, 00:12:13.900 "r_mbytes_per_sec": 0, 00:12:13.900 "w_mbytes_per_sec": 0 00:12:13.900 }, 00:12:13.900 "claimed": false, 00:12:13.900 "zoned": false, 00:12:13.900 "supported_io_types": { 00:12:13.900 "read": true, 00:12:13.900 "write": true, 00:12:13.900 "unmap": true, 00:12:13.900 "flush": true, 00:12:13.900 "reset": true, 00:12:13.900 "nvme_admin": false, 00:12:13.900 "nvme_io": false, 00:12:13.900 "nvme_io_md": false, 00:12:13.900 "write_zeroes": true, 00:12:13.900 "zcopy": true, 00:12:13.900 "get_zone_info": false, 00:12:13.900 "zone_management": false, 00:12:13.900 "zone_append": false, 00:12:13.900 "compare": false, 00:12:13.900 "compare_and_write": false, 00:12:13.900 "abort": true, 00:12:13.900 "seek_hole": false, 00:12:13.900 "seek_data": false, 00:12:13.900 "copy": true, 00:12:13.900 "nvme_iov_md": false 00:12:13.900 }, 00:12:13.900 "memory_domains": [ 00:12:13.900 { 00:12:13.900 "dma_device_id": "system", 00:12:13.900 "dma_device_type": 1 00:12:13.900 }, 00:12:13.900 { 00:12:13.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.900 "dma_device_type": 2 00:12:13.900 } 00:12:13.900 ], 00:12:13.900 "driver_specific": {} 00:12:13.900 } 00:12:13.900 ] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.900 BaseBdev4 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.900 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.901 [ 00:12:13.901 { 00:12:13.901 "name": "BaseBdev4", 00:12:13.901 "aliases": [ 00:12:13.901 "7d85089e-89bd-4757-ba02-8d0ddaef13dc" 00:12:13.901 ], 00:12:13.901 "product_name": "Malloc disk", 00:12:13.901 "block_size": 512, 00:12:13.901 "num_blocks": 65536, 00:12:13.901 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:13.901 "assigned_rate_limits": { 00:12:13.901 "rw_ios_per_sec": 0, 00:12:13.901 "rw_mbytes_per_sec": 0, 00:12:13.901 "r_mbytes_per_sec": 0, 00:12:13.901 "w_mbytes_per_sec": 0 00:12:13.901 }, 00:12:13.901 "claimed": false, 00:12:13.901 "zoned": false, 00:12:13.901 "supported_io_types": { 00:12:13.901 "read": true, 00:12:13.901 "write": true, 00:12:13.901 "unmap": true, 00:12:13.901 "flush": true, 00:12:13.901 "reset": true, 00:12:13.901 "nvme_admin": false, 00:12:13.901 "nvme_io": false, 00:12:13.901 "nvme_io_md": false, 00:12:13.901 "write_zeroes": true, 00:12:13.901 "zcopy": true, 00:12:13.901 "get_zone_info": false, 00:12:13.901 "zone_management": false, 00:12:13.901 "zone_append": false, 00:12:13.901 "compare": false, 00:12:13.901 "compare_and_write": false, 00:12:13.901 "abort": true, 00:12:13.901 "seek_hole": false, 00:12:13.901 "seek_data": false, 00:12:13.901 "copy": true, 00:12:13.901 "nvme_iov_md": false 00:12:13.901 }, 00:12:13.901 "memory_domains": [ 00:12:13.901 { 00:12:13.901 "dma_device_id": "system", 00:12:13.901 "dma_device_type": 1 00:12:13.901 }, 00:12:13.901 { 00:12:13.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.901 "dma_device_type": 2 00:12:13.901 } 00:12:13.901 ], 00:12:13.901 "driver_specific": {} 00:12:13.901 } 00:12:13.901 ] 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.901 [2024-11-26 18:59:05.202807] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.901 [2024-11-26 18:59:05.202883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.901 [2024-11-26 18:59:05.202939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.901 [2024-11-26 18:59:05.205561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.901 [2024-11-26 18:59:05.205788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.901 "name": "Existed_Raid", 00:12:13.901 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:13.901 "strip_size_kb": 0, 00:12:13.901 "state": "configuring", 00:12:13.901 "raid_level": "raid1", 00:12:13.901 "superblock": true, 00:12:13.901 "num_base_bdevs": 4, 00:12:13.901 "num_base_bdevs_discovered": 3, 00:12:13.901 "num_base_bdevs_operational": 4, 00:12:13.901 "base_bdevs_list": [ 00:12:13.901 { 00:12:13.901 "name": "BaseBdev1", 00:12:13.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.901 "is_configured": false, 00:12:13.901 "data_offset": 0, 00:12:13.901 "data_size": 0 00:12:13.901 }, 00:12:13.901 { 00:12:13.901 "name": "BaseBdev2", 00:12:13.901 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 
00:12:13.901 "is_configured": true, 00:12:13.901 "data_offset": 2048, 00:12:13.901 "data_size": 63488 00:12:13.901 }, 00:12:13.901 { 00:12:13.901 "name": "BaseBdev3", 00:12:13.901 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:13.901 "is_configured": true, 00:12:13.901 "data_offset": 2048, 00:12:13.901 "data_size": 63488 00:12:13.901 }, 00:12:13.901 { 00:12:13.901 "name": "BaseBdev4", 00:12:13.901 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:13.901 "is_configured": true, 00:12:13.901 "data_offset": 2048, 00:12:13.901 "data_size": 63488 00:12:13.901 } 00:12:13.901 ] 00:12:13.901 }' 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.901 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.468 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:14.468 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.468 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.468 [2024-11-26 18:59:05.678985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:14.468 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.469 "name": "Existed_Raid", 00:12:14.469 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:14.469 "strip_size_kb": 0, 00:12:14.469 "state": "configuring", 00:12:14.469 "raid_level": "raid1", 00:12:14.469 "superblock": true, 00:12:14.469 "num_base_bdevs": 4, 00:12:14.469 "num_base_bdevs_discovered": 2, 00:12:14.469 "num_base_bdevs_operational": 4, 00:12:14.469 "base_bdevs_list": [ 00:12:14.469 { 00:12:14.469 "name": "BaseBdev1", 00:12:14.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.469 "is_configured": false, 00:12:14.469 "data_offset": 0, 00:12:14.469 "data_size": 0 00:12:14.469 }, 00:12:14.469 { 00:12:14.469 "name": null, 00:12:14.469 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:14.469 
"is_configured": false, 00:12:14.469 "data_offset": 0, 00:12:14.469 "data_size": 63488 00:12:14.469 }, 00:12:14.469 { 00:12:14.469 "name": "BaseBdev3", 00:12:14.469 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:14.469 "is_configured": true, 00:12:14.469 "data_offset": 2048, 00:12:14.469 "data_size": 63488 00:12:14.469 }, 00:12:14.469 { 00:12:14.469 "name": "BaseBdev4", 00:12:14.469 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:14.469 "is_configured": true, 00:12:14.469 "data_offset": 2048, 00:12:14.469 "data_size": 63488 00:12:14.469 } 00:12:14.469 ] 00:12:14.469 }' 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.469 18:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.035 [2024-11-26 18:59:06.288711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.035 BaseBdev1 
00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.035 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.035 [ 00:12:15.035 { 00:12:15.035 "name": "BaseBdev1", 00:12:15.035 "aliases": [ 00:12:15.035 "54b10551-2ce9-4aaa-ad73-3f75d68230c3" 00:12:15.035 ], 00:12:15.035 "product_name": "Malloc disk", 00:12:15.035 "block_size": 512, 00:12:15.035 "num_blocks": 65536, 00:12:15.035 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:15.035 "assigned_rate_limits": { 00:12:15.035 
"rw_ios_per_sec": 0, 00:12:15.035 "rw_mbytes_per_sec": 0, 00:12:15.035 "r_mbytes_per_sec": 0, 00:12:15.035 "w_mbytes_per_sec": 0 00:12:15.035 }, 00:12:15.035 "claimed": true, 00:12:15.035 "claim_type": "exclusive_write", 00:12:15.035 "zoned": false, 00:12:15.035 "supported_io_types": { 00:12:15.035 "read": true, 00:12:15.035 "write": true, 00:12:15.035 "unmap": true, 00:12:15.035 "flush": true, 00:12:15.035 "reset": true, 00:12:15.035 "nvme_admin": false, 00:12:15.035 "nvme_io": false, 00:12:15.035 "nvme_io_md": false, 00:12:15.035 "write_zeroes": true, 00:12:15.035 "zcopy": true, 00:12:15.035 "get_zone_info": false, 00:12:15.035 "zone_management": false, 00:12:15.035 "zone_append": false, 00:12:15.035 "compare": false, 00:12:15.035 "compare_and_write": false, 00:12:15.035 "abort": true, 00:12:15.036 "seek_hole": false, 00:12:15.036 "seek_data": false, 00:12:15.036 "copy": true, 00:12:15.036 "nvme_iov_md": false 00:12:15.036 }, 00:12:15.036 "memory_domains": [ 00:12:15.036 { 00:12:15.036 "dma_device_id": "system", 00:12:15.036 "dma_device_type": 1 00:12:15.036 }, 00:12:15.036 { 00:12:15.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.036 "dma_device_type": 2 00:12:15.036 } 00:12:15.036 ], 00:12:15.036 "driver_specific": {} 00:12:15.036 } 00:12:15.036 ] 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.036 "name": "Existed_Raid", 00:12:15.036 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:15.036 "strip_size_kb": 0, 00:12:15.036 "state": "configuring", 00:12:15.036 "raid_level": "raid1", 00:12:15.036 "superblock": true, 00:12:15.036 "num_base_bdevs": 4, 00:12:15.036 "num_base_bdevs_discovered": 3, 00:12:15.036 "num_base_bdevs_operational": 4, 00:12:15.036 "base_bdevs_list": [ 00:12:15.036 { 00:12:15.036 "name": "BaseBdev1", 00:12:15.036 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:15.036 "is_configured": true, 00:12:15.036 "data_offset": 2048, 00:12:15.036 "data_size": 63488 
00:12:15.036 }, 00:12:15.036 { 00:12:15.036 "name": null, 00:12:15.036 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:15.036 "is_configured": false, 00:12:15.036 "data_offset": 0, 00:12:15.036 "data_size": 63488 00:12:15.036 }, 00:12:15.036 { 00:12:15.036 "name": "BaseBdev3", 00:12:15.036 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:15.036 "is_configured": true, 00:12:15.036 "data_offset": 2048, 00:12:15.036 "data_size": 63488 00:12:15.036 }, 00:12:15.036 { 00:12:15.036 "name": "BaseBdev4", 00:12:15.036 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:15.036 "is_configured": true, 00:12:15.036 "data_offset": 2048, 00:12:15.036 "data_size": 63488 00:12:15.036 } 00:12:15.036 ] 00:12:15.036 }' 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.036 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.601 
[2024-11-26 18:59:06.913024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.601 18:59:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.601 "name": "Existed_Raid", 00:12:15.601 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:15.601 "strip_size_kb": 0, 00:12:15.601 "state": "configuring", 00:12:15.601 "raid_level": "raid1", 00:12:15.601 "superblock": true, 00:12:15.601 "num_base_bdevs": 4, 00:12:15.601 "num_base_bdevs_discovered": 2, 00:12:15.601 "num_base_bdevs_operational": 4, 00:12:15.601 "base_bdevs_list": [ 00:12:15.601 { 00:12:15.601 "name": "BaseBdev1", 00:12:15.601 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:15.601 "is_configured": true, 00:12:15.601 "data_offset": 2048, 00:12:15.601 "data_size": 63488 00:12:15.601 }, 00:12:15.601 { 00:12:15.601 "name": null, 00:12:15.601 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:15.601 "is_configured": false, 00:12:15.601 "data_offset": 0, 00:12:15.601 "data_size": 63488 00:12:15.601 }, 00:12:15.601 { 00:12:15.601 "name": null, 00:12:15.601 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:15.601 "is_configured": false, 00:12:15.601 "data_offset": 0, 00:12:15.601 "data_size": 63488 00:12:15.601 }, 00:12:15.601 { 00:12:15.601 "name": "BaseBdev4", 00:12:15.601 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:15.601 "is_configured": true, 00:12:15.601 "data_offset": 2048, 00:12:15.601 "data_size": 63488 00:12:15.601 } 00:12:15.601 ] 00:12:15.601 }' 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.601 18:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.167 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.167 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.168 
18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.168 [2024-11-26 18:59:07.489148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.168 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.426 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.426 "name": "Existed_Raid", 00:12:16.426 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:16.426 "strip_size_kb": 0, 00:12:16.426 "state": "configuring", 00:12:16.426 "raid_level": "raid1", 00:12:16.426 "superblock": true, 00:12:16.426 "num_base_bdevs": 4, 00:12:16.426 "num_base_bdevs_discovered": 3, 00:12:16.426 "num_base_bdevs_operational": 4, 00:12:16.426 "base_bdevs_list": [ 00:12:16.426 { 00:12:16.426 "name": "BaseBdev1", 00:12:16.426 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:16.426 "is_configured": true, 00:12:16.426 "data_offset": 2048, 00:12:16.426 "data_size": 63488 00:12:16.426 }, 00:12:16.426 { 00:12:16.426 "name": null, 00:12:16.426 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:16.426 "is_configured": false, 00:12:16.426 "data_offset": 0, 00:12:16.426 "data_size": 63488 00:12:16.426 }, 00:12:16.426 { 00:12:16.426 "name": "BaseBdev3", 00:12:16.426 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:16.426 "is_configured": true, 00:12:16.426 "data_offset": 2048, 00:12:16.426 "data_size": 63488 00:12:16.427 }, 00:12:16.427 { 00:12:16.427 "name": "BaseBdev4", 00:12:16.427 "uuid": 
"7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:16.427 "is_configured": true, 00:12:16.427 "data_offset": 2048, 00:12:16.427 "data_size": 63488 00:12:16.427 } 00:12:16.427 ] 00:12:16.427 }' 00:12:16.427 18:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.427 18:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.685 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.685 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.943 [2024-11-26 18:59:08.073351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.943 "name": "Existed_Raid", 00:12:16.943 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:16.943 "strip_size_kb": 0, 00:12:16.943 "state": "configuring", 00:12:16.943 "raid_level": "raid1", 00:12:16.943 "superblock": true, 00:12:16.943 "num_base_bdevs": 4, 00:12:16.943 "num_base_bdevs_discovered": 2, 00:12:16.943 "num_base_bdevs_operational": 4, 00:12:16.943 "base_bdevs_list": [ 00:12:16.943 { 00:12:16.943 "name": null, 00:12:16.943 
"uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:16.943 "is_configured": false, 00:12:16.943 "data_offset": 0, 00:12:16.943 "data_size": 63488 00:12:16.943 }, 00:12:16.943 { 00:12:16.943 "name": null, 00:12:16.943 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:16.943 "is_configured": false, 00:12:16.943 "data_offset": 0, 00:12:16.943 "data_size": 63488 00:12:16.943 }, 00:12:16.943 { 00:12:16.943 "name": "BaseBdev3", 00:12:16.943 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:16.943 "is_configured": true, 00:12:16.943 "data_offset": 2048, 00:12:16.943 "data_size": 63488 00:12:16.943 }, 00:12:16.943 { 00:12:16.943 "name": "BaseBdev4", 00:12:16.943 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:16.943 "is_configured": true, 00:12:16.943 "data_offset": 2048, 00:12:16.943 "data_size": 63488 00:12:16.943 } 00:12:16.943 ] 00:12:16.943 }' 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.943 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.508 [2024-11-26 18:59:08.728264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.508 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.509 18:59:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.509 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.509 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.509 "name": "Existed_Raid", 00:12:17.509 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:17.509 "strip_size_kb": 0, 00:12:17.509 "state": "configuring", 00:12:17.509 "raid_level": "raid1", 00:12:17.509 "superblock": true, 00:12:17.509 "num_base_bdevs": 4, 00:12:17.509 "num_base_bdevs_discovered": 3, 00:12:17.509 "num_base_bdevs_operational": 4, 00:12:17.509 "base_bdevs_list": [ 00:12:17.509 { 00:12:17.509 "name": null, 00:12:17.509 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:17.509 "is_configured": false, 00:12:17.509 "data_offset": 0, 00:12:17.509 "data_size": 63488 00:12:17.509 }, 00:12:17.509 { 00:12:17.509 "name": "BaseBdev2", 00:12:17.509 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:17.509 "is_configured": true, 00:12:17.509 "data_offset": 2048, 00:12:17.509 "data_size": 63488 00:12:17.509 }, 00:12:17.509 { 00:12:17.509 "name": "BaseBdev3", 00:12:17.509 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:17.509 "is_configured": true, 00:12:17.509 "data_offset": 2048, 00:12:17.509 "data_size": 63488 00:12:17.509 }, 00:12:17.509 { 00:12:17.509 "name": "BaseBdev4", 00:12:17.509 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:17.509 "is_configured": true, 00:12:17.509 "data_offset": 2048, 00:12:17.509 "data_size": 63488 00:12:17.509 } 00:12:17.509 ] 00:12:17.509 }' 00:12:17.509 18:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.509 18:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.074 18:59:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 54b10551-2ce9-4aaa-ad73-3f75d68230c3 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.074 [2024-11-26 18:59:09.382431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:18.074 [2024-11-26 18:59:09.382751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:18.074 [2024-11-26 18:59:09.382777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.074 [2024-11-26 18:59:09.383174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:18.074 NewBaseBdev 00:12:18.074 [2024-11-26 18:59:09.383380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:18.074 [2024-11-26 18:59:09.383397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:18.074 [2024-11-26 18:59:09.383580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.074 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.075 18:59:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.075 [ 00:12:18.075 { 00:12:18.075 "name": "NewBaseBdev", 00:12:18.075 "aliases": [ 00:12:18.075 "54b10551-2ce9-4aaa-ad73-3f75d68230c3" 00:12:18.075 ], 00:12:18.075 "product_name": "Malloc disk", 00:12:18.075 "block_size": 512, 00:12:18.075 "num_blocks": 65536, 00:12:18.075 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:18.075 "assigned_rate_limits": { 00:12:18.075 "rw_ios_per_sec": 0, 00:12:18.075 "rw_mbytes_per_sec": 0, 00:12:18.075 "r_mbytes_per_sec": 0, 00:12:18.075 "w_mbytes_per_sec": 0 00:12:18.075 }, 00:12:18.075 "claimed": true, 00:12:18.075 "claim_type": "exclusive_write", 00:12:18.075 "zoned": false, 00:12:18.075 "supported_io_types": { 00:12:18.075 "read": true, 00:12:18.075 "write": true, 00:12:18.075 "unmap": true, 00:12:18.075 "flush": true, 00:12:18.075 "reset": true, 00:12:18.075 "nvme_admin": false, 00:12:18.075 "nvme_io": false, 00:12:18.075 "nvme_io_md": false, 00:12:18.075 "write_zeroes": true, 00:12:18.075 "zcopy": true, 00:12:18.075 "get_zone_info": false, 00:12:18.075 "zone_management": false, 00:12:18.075 "zone_append": false, 00:12:18.075 "compare": false, 00:12:18.075 "compare_and_write": false, 00:12:18.075 "abort": true, 00:12:18.075 "seek_hole": false, 00:12:18.075 "seek_data": false, 00:12:18.075 "copy": true, 00:12:18.075 "nvme_iov_md": false 00:12:18.075 }, 00:12:18.075 "memory_domains": [ 00:12:18.075 { 00:12:18.075 "dma_device_id": "system", 00:12:18.075 "dma_device_type": 1 00:12:18.075 }, 00:12:18.075 { 00:12:18.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.075 "dma_device_type": 2 00:12:18.075 } 00:12:18.075 ], 00:12:18.075 "driver_specific": {} 00:12:18.075 } 00:12:18.075 ] 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.075 18:59:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.075 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.334 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.334 "name": "Existed_Raid", 00:12:18.334 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:18.334 "strip_size_kb": 0, 00:12:18.334 
"state": "online", 00:12:18.334 "raid_level": "raid1", 00:12:18.334 "superblock": true, 00:12:18.334 "num_base_bdevs": 4, 00:12:18.334 "num_base_bdevs_discovered": 4, 00:12:18.334 "num_base_bdevs_operational": 4, 00:12:18.334 "base_bdevs_list": [ 00:12:18.334 { 00:12:18.334 "name": "NewBaseBdev", 00:12:18.334 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:18.334 "is_configured": true, 00:12:18.334 "data_offset": 2048, 00:12:18.334 "data_size": 63488 00:12:18.334 }, 00:12:18.334 { 00:12:18.334 "name": "BaseBdev2", 00:12:18.334 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:18.334 "is_configured": true, 00:12:18.334 "data_offset": 2048, 00:12:18.334 "data_size": 63488 00:12:18.334 }, 00:12:18.334 { 00:12:18.334 "name": "BaseBdev3", 00:12:18.334 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:18.334 "is_configured": true, 00:12:18.334 "data_offset": 2048, 00:12:18.334 "data_size": 63488 00:12:18.334 }, 00:12:18.334 { 00:12:18.334 "name": "BaseBdev4", 00:12:18.334 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:18.334 "is_configured": true, 00:12:18.334 "data_offset": 2048, 00:12:18.334 "data_size": 63488 00:12:18.334 } 00:12:18.334 ] 00:12:18.334 }' 00:12:18.334 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.334 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.592 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.592 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:18.592 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.592 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.592 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.593 
18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.593 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:18.593 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.593 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.593 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.593 [2024-11-26 18:59:09.931096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.593 18:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.852 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.852 "name": "Existed_Raid", 00:12:18.852 "aliases": [ 00:12:18.852 "3e4a744c-4e23-454b-a1d7-c78ac0414449" 00:12:18.852 ], 00:12:18.852 "product_name": "Raid Volume", 00:12:18.852 "block_size": 512, 00:12:18.852 "num_blocks": 63488, 00:12:18.852 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:18.852 "assigned_rate_limits": { 00:12:18.852 "rw_ios_per_sec": 0, 00:12:18.852 "rw_mbytes_per_sec": 0, 00:12:18.852 "r_mbytes_per_sec": 0, 00:12:18.852 "w_mbytes_per_sec": 0 00:12:18.852 }, 00:12:18.852 "claimed": false, 00:12:18.852 "zoned": false, 00:12:18.852 "supported_io_types": { 00:12:18.852 "read": true, 00:12:18.852 "write": true, 00:12:18.852 "unmap": false, 00:12:18.852 "flush": false, 00:12:18.852 "reset": true, 00:12:18.852 "nvme_admin": false, 00:12:18.852 "nvme_io": false, 00:12:18.852 "nvme_io_md": false, 00:12:18.852 "write_zeroes": true, 00:12:18.852 "zcopy": false, 00:12:18.852 "get_zone_info": false, 00:12:18.852 "zone_management": false, 00:12:18.852 "zone_append": false, 00:12:18.852 "compare": false, 00:12:18.852 "compare_and_write": false, 00:12:18.852 
"abort": false, 00:12:18.852 "seek_hole": false, 00:12:18.852 "seek_data": false, 00:12:18.852 "copy": false, 00:12:18.852 "nvme_iov_md": false 00:12:18.852 }, 00:12:18.852 "memory_domains": [ 00:12:18.852 { 00:12:18.852 "dma_device_id": "system", 00:12:18.852 "dma_device_type": 1 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.852 "dma_device_type": 2 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "system", 00:12:18.852 "dma_device_type": 1 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.852 "dma_device_type": 2 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "system", 00:12:18.852 "dma_device_type": 1 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.852 "dma_device_type": 2 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "system", 00:12:18.852 "dma_device_type": 1 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.852 "dma_device_type": 2 00:12:18.852 } 00:12:18.852 ], 00:12:18.852 "driver_specific": { 00:12:18.852 "raid": { 00:12:18.852 "uuid": "3e4a744c-4e23-454b-a1d7-c78ac0414449", 00:12:18.852 "strip_size_kb": 0, 00:12:18.852 "state": "online", 00:12:18.852 "raid_level": "raid1", 00:12:18.852 "superblock": true, 00:12:18.852 "num_base_bdevs": 4, 00:12:18.852 "num_base_bdevs_discovered": 4, 00:12:18.852 "num_base_bdevs_operational": 4, 00:12:18.852 "base_bdevs_list": [ 00:12:18.852 { 00:12:18.852 "name": "NewBaseBdev", 00:12:18.852 "uuid": "54b10551-2ce9-4aaa-ad73-3f75d68230c3", 00:12:18.852 "is_configured": true, 00:12:18.852 "data_offset": 2048, 00:12:18.852 "data_size": 63488 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "name": "BaseBdev2", 00:12:18.852 "uuid": "629defcb-3578-46dd-aad8-48c70908f679", 00:12:18.852 "is_configured": true, 00:12:18.852 "data_offset": 2048, 00:12:18.852 "data_size": 63488 00:12:18.852 }, 00:12:18.852 { 
00:12:18.852 "name": "BaseBdev3", 00:12:18.852 "uuid": "ee8a2390-77c9-46d3-83b9-18715783a15c", 00:12:18.852 "is_configured": true, 00:12:18.852 "data_offset": 2048, 00:12:18.852 "data_size": 63488 00:12:18.852 }, 00:12:18.852 { 00:12:18.852 "name": "BaseBdev4", 00:12:18.852 "uuid": "7d85089e-89bd-4757-ba02-8d0ddaef13dc", 00:12:18.852 "is_configured": true, 00:12:18.852 "data_offset": 2048, 00:12:18.852 "data_size": 63488 00:12:18.852 } 00:12:18.852 ] 00:12:18.852 } 00:12:18.852 } 00:12:18.852 }' 00:12:18.852 18:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:18.852 BaseBdev2 00:12:18.852 BaseBdev3 00:12:18.852 BaseBdev4' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.852 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.111 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.111 18:59:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 [2024-11-26 18:59:10.282753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.112 [2024-11-26 18:59:10.282965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.112 [2024-11-26 18:59:10.283097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.112 [2024-11-26 18:59:10.283514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.112 [2024-11-26 18:59:10.283539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74052 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74052 ']' 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74052 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74052 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.112 killing process with pid 74052 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74052' 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74052 00:12:19.112 [2024-11-26 18:59:10.325484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.112 18:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74052 00:12:19.370 [2024-11-26 18:59:10.686500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.778 ************************************ 00:12:20.779 END TEST raid_state_function_test_sb 00:12:20.779 18:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:20.779 00:12:20.779 real 0m12.891s 00:12:20.779 user 0m21.228s 00:12:20.779 sys 0m1.869s 
00:12:20.779 18:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.779 18:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.779 ************************************ 00:12:20.779 18:59:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:20.779 18:59:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.779 18:59:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.779 18:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.779 ************************************ 00:12:20.779 START TEST raid_superblock_test 00:12:20.779 ************************************ 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:20.779 18:59:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74735 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74735 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74735 ']' 00:12:20.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.779 18:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.779 [2024-11-26 18:59:11.926839] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:12:20.779 [2024-11-26 18:59:11.927040] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74735 ] 00:12:20.779 [2024-11-26 18:59:12.123441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.038 [2024-11-26 18:59:12.275360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.295 [2024-11-26 18:59:12.496473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.295 [2024-11-26 18:59:12.496714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.552 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.552 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.552 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:21.553 
18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.553 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 malloc1 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 [2024-11-26 18:59:12.932978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:21.812 [2024-11-26 18:59:12.933049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.812 [2024-11-26 18:59:12.933082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.812 [2024-11-26 18:59:12.933099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.812 [2024-11-26 18:59:12.935950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.812 [2024-11-26 18:59:12.936015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:21.812 pt1 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 malloc2 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 [2024-11-26 18:59:12.988684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.812 [2024-11-26 18:59:12.988756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.812 [2024-11-26 18:59:12.988806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.812 [2024-11-26 18:59:12.988821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.812 [2024-11-26 18:59:12.991667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.812 [2024-11-26 18:59:12.991855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.812 
pt2 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 malloc3 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 [2024-11-26 18:59:13.055042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:21.812 [2024-11-26 18:59:13.055121] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.812 [2024-11-26 18:59:13.055155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:21.812 [2024-11-26 18:59:13.055171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.812 [2024-11-26 18:59:13.057945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.812 [2024-11-26 18:59:13.058187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:21.812 pt3 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 malloc4 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 [2024-11-26 18:59:13.110829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:21.812 [2024-11-26 18:59:13.110929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.812 [2024-11-26 18:59:13.110964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:21.812 [2024-11-26 18:59:13.110980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.812 [2024-11-26 18:59:13.113787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.812 [2024-11-26 18:59:13.114027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:21.812 pt4 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.812 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.812 [2024-11-26 18:59:13.122926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:21.812 [2024-11-26 18:59:13.125359] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.812 [2024-11-26 18:59:13.125595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:21.812 [2024-11-26 18:59:13.125705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:21.812 [2024-11-26 18:59:13.125979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:21.812 [2024-11-26 18:59:13.126003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.812 [2024-11-26 18:59:13.126337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:21.812 [2024-11-26 18:59:13.126561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:21.813 [2024-11-26 18:59:13.126585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:21.813 [2024-11-26 18:59:13.126770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.813 
18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.813 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.071 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.072 "name": "raid_bdev1", 00:12:22.072 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:22.072 "strip_size_kb": 0, 00:12:22.072 "state": "online", 00:12:22.072 "raid_level": "raid1", 00:12:22.072 "superblock": true, 00:12:22.072 "num_base_bdevs": 4, 00:12:22.072 "num_base_bdevs_discovered": 4, 00:12:22.072 "num_base_bdevs_operational": 4, 00:12:22.072 "base_bdevs_list": [ 00:12:22.072 { 00:12:22.072 "name": "pt1", 00:12:22.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.072 "is_configured": true, 00:12:22.072 "data_offset": 2048, 00:12:22.072 "data_size": 63488 00:12:22.072 }, 00:12:22.072 { 00:12:22.072 "name": "pt2", 00:12:22.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.072 "is_configured": true, 00:12:22.072 "data_offset": 2048, 00:12:22.072 "data_size": 63488 00:12:22.072 }, 00:12:22.072 { 00:12:22.072 "name": "pt3", 00:12:22.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.072 "is_configured": true, 00:12:22.072 "data_offset": 2048, 00:12:22.072 "data_size": 63488 
00:12:22.072 }, 00:12:22.072 { 00:12:22.072 "name": "pt4", 00:12:22.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.072 "is_configured": true, 00:12:22.072 "data_offset": 2048, 00:12:22.072 "data_size": 63488 00:12:22.072 } 00:12:22.072 ] 00:12:22.072 }' 00:12:22.072 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.072 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.330 [2024-11-26 18:59:13.659531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.330 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.589 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.589 "name": "raid_bdev1", 00:12:22.589 "aliases": [ 00:12:22.589 "e0bc8e62-0976-4943-b157-ab144430ef3b" 00:12:22.589 ], 
00:12:22.589 "product_name": "Raid Volume", 00:12:22.589 "block_size": 512, 00:12:22.589 "num_blocks": 63488, 00:12:22.589 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:22.589 "assigned_rate_limits": { 00:12:22.589 "rw_ios_per_sec": 0, 00:12:22.589 "rw_mbytes_per_sec": 0, 00:12:22.589 "r_mbytes_per_sec": 0, 00:12:22.589 "w_mbytes_per_sec": 0 00:12:22.589 }, 00:12:22.589 "claimed": false, 00:12:22.589 "zoned": false, 00:12:22.589 "supported_io_types": { 00:12:22.589 "read": true, 00:12:22.589 "write": true, 00:12:22.589 "unmap": false, 00:12:22.589 "flush": false, 00:12:22.589 "reset": true, 00:12:22.589 "nvme_admin": false, 00:12:22.589 "nvme_io": false, 00:12:22.589 "nvme_io_md": false, 00:12:22.589 "write_zeroes": true, 00:12:22.589 "zcopy": false, 00:12:22.589 "get_zone_info": false, 00:12:22.589 "zone_management": false, 00:12:22.589 "zone_append": false, 00:12:22.589 "compare": false, 00:12:22.589 "compare_and_write": false, 00:12:22.589 "abort": false, 00:12:22.589 "seek_hole": false, 00:12:22.589 "seek_data": false, 00:12:22.589 "copy": false, 00:12:22.589 "nvme_iov_md": false 00:12:22.589 }, 00:12:22.589 "memory_domains": [ 00:12:22.589 { 00:12:22.589 "dma_device_id": "system", 00:12:22.589 "dma_device_type": 1 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.589 "dma_device_type": 2 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": "system", 00:12:22.589 "dma_device_type": 1 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.589 "dma_device_type": 2 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": "system", 00:12:22.589 "dma_device_type": 1 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.589 "dma_device_type": 2 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": "system", 00:12:22.589 "dma_device_type": 1 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:22.589 "dma_device_type": 2 00:12:22.589 } 00:12:22.589 ], 00:12:22.589 "driver_specific": { 00:12:22.589 "raid": { 00:12:22.589 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:22.589 "strip_size_kb": 0, 00:12:22.589 "state": "online", 00:12:22.589 "raid_level": "raid1", 00:12:22.589 "superblock": true, 00:12:22.589 "num_base_bdevs": 4, 00:12:22.589 "num_base_bdevs_discovered": 4, 00:12:22.589 "num_base_bdevs_operational": 4, 00:12:22.589 "base_bdevs_list": [ 00:12:22.589 { 00:12:22.589 "name": "pt1", 00:12:22.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.589 "is_configured": true, 00:12:22.589 "data_offset": 2048, 00:12:22.589 "data_size": 63488 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "name": "pt2", 00:12:22.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.589 "is_configured": true, 00:12:22.589 "data_offset": 2048, 00:12:22.589 "data_size": 63488 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "name": "pt3", 00:12:22.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.589 "is_configured": true, 00:12:22.589 "data_offset": 2048, 00:12:22.589 "data_size": 63488 00:12:22.589 }, 00:12:22.589 { 00:12:22.589 "name": "pt4", 00:12:22.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.589 "is_configured": true, 00:12:22.589 "data_offset": 2048, 00:12:22.589 "data_size": 63488 00:12:22.589 } 00:12:22.589 ] 00:12:22.589 } 00:12:22.589 } 00:12:22.589 }' 00:12:22.589 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.589 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.589 pt2 00:12:22.589 pt3 00:12:22.590 pt4' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.590 18:59:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.590 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.848 18:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.848 [2024-11-26 18:59:14.043581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e0bc8e62-0976-4943-b157-ab144430ef3b 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e0bc8e62-0976-4943-b157-ab144430ef3b ']' 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.848 [2024-11-26 18:59:14.107191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.848 [2024-11-26 18:59:14.107345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.848 [2024-11-26 18:59:14.107496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.848 [2024-11-26 18:59:14.107612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.848 [2024-11-26 18:59:14.107637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.848 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:22.849 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.108 18:59:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.108 [2024-11-26 18:59:14.275256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:23.108 [2024-11-26 18:59:14.277855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:23.108 [2024-11-26 18:59:14.277944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:23.108 [2024-11-26 18:59:14.278015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:23.108 [2024-11-26 18:59:14.278105] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:23.108 [2024-11-26 18:59:14.278177] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:23.108 [2024-11-26 18:59:14.278210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:23.108 [2024-11-26 18:59:14.278242] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:23.108 [2024-11-26 18:59:14.278264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.108 [2024-11-26 18:59:14.278280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:23.108 request: 00:12:23.108 { 00:12:23.108 "name": "raid_bdev1", 00:12:23.108 "raid_level": "raid1", 00:12:23.108 "base_bdevs": [ 00:12:23.108 "malloc1", 00:12:23.108 "malloc2", 00:12:23.108 "malloc3", 00:12:23.108 "malloc4" 00:12:23.108 ], 00:12:23.108 "superblock": false, 00:12:23.108 "method": "bdev_raid_create", 00:12:23.108 "req_id": 1 00:12:23.108 } 00:12:23.108 Got JSON-RPC error response 00:12:23.108 response: 00:12:23.108 { 00:12:23.108 "code": -17, 00:12:23.108 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:23.108 } 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:23.108 
18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.108 [2024-11-26 18:59:14.339228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:23.108 [2024-11-26 18:59:14.339303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.108 [2024-11-26 18:59:14.339332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:23.108 [2024-11-26 18:59:14.339349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.108 [2024-11-26 18:59:14.342258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.108 [2024-11-26 18:59:14.342310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:23.108 [2024-11-26 18:59:14.342416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:23.108 [2024-11-26 18:59:14.342520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:23.108 pt1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.108 18:59:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.108 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.108 "name": "raid_bdev1", 00:12:23.108 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:23.108 "strip_size_kb": 0, 00:12:23.108 "state": "configuring", 00:12:23.108 "raid_level": "raid1", 00:12:23.108 "superblock": true, 00:12:23.108 "num_base_bdevs": 4, 00:12:23.108 "num_base_bdevs_discovered": 1, 00:12:23.108 "num_base_bdevs_operational": 4, 00:12:23.108 "base_bdevs_list": [ 00:12:23.108 { 00:12:23.108 "name": "pt1", 00:12:23.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.109 "is_configured": true, 00:12:23.109 "data_offset": 2048, 00:12:23.109 "data_size": 63488 00:12:23.109 }, 00:12:23.109 { 00:12:23.109 "name": null, 00:12:23.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.109 "is_configured": false, 00:12:23.109 "data_offset": 2048, 00:12:23.109 "data_size": 63488 00:12:23.109 }, 00:12:23.109 { 00:12:23.109 "name": null, 00:12:23.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.109 
"is_configured": false, 00:12:23.109 "data_offset": 2048, 00:12:23.109 "data_size": 63488 00:12:23.109 }, 00:12:23.109 { 00:12:23.109 "name": null, 00:12:23.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.109 "is_configured": false, 00:12:23.109 "data_offset": 2048, 00:12:23.109 "data_size": 63488 00:12:23.109 } 00:12:23.109 ] 00:12:23.109 }' 00:12:23.109 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.109 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.676 [2024-11-26 18:59:14.831396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:23.676 [2024-11-26 18:59:14.831512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.676 [2024-11-26 18:59:14.831546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:23.676 [2024-11-26 18:59:14.831566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.676 [2024-11-26 18:59:14.832163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.676 [2024-11-26 18:59:14.832206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:23.676 [2024-11-26 18:59:14.832316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:23.676 [2024-11-26 18:59:14.832365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:23.676 pt2 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.676 [2024-11-26 18:59:14.839386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.676 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.676 "name": "raid_bdev1", 00:12:23.676 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:23.676 "strip_size_kb": 0, 00:12:23.676 "state": "configuring", 00:12:23.676 "raid_level": "raid1", 00:12:23.676 "superblock": true, 00:12:23.676 "num_base_bdevs": 4, 00:12:23.676 "num_base_bdevs_discovered": 1, 00:12:23.676 "num_base_bdevs_operational": 4, 00:12:23.676 "base_bdevs_list": [ 00:12:23.676 { 00:12:23.677 "name": "pt1", 00:12:23.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.677 "is_configured": true, 00:12:23.677 "data_offset": 2048, 00:12:23.677 "data_size": 63488 00:12:23.677 }, 00:12:23.677 { 00:12:23.677 "name": null, 00:12:23.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.677 "is_configured": false, 00:12:23.677 "data_offset": 0, 00:12:23.677 "data_size": 63488 00:12:23.677 }, 00:12:23.677 { 00:12:23.677 "name": null, 00:12:23.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.677 "is_configured": false, 00:12:23.677 "data_offset": 2048, 00:12:23.677 "data_size": 63488 00:12:23.677 }, 00:12:23.677 { 00:12:23.677 "name": null, 00:12:23.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.677 "is_configured": false, 00:12:23.677 "data_offset": 2048, 00:12:23.677 "data_size": 63488 00:12:23.677 } 00:12:23.677 ] 00:12:23.677 }' 00:12:23.677 18:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.677 18:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.244 [2024-11-26 18:59:15.375543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.244 [2024-11-26 18:59:15.375627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.244 [2024-11-26 18:59:15.375659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:24.244 [2024-11-26 18:59:15.375675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.244 [2024-11-26 18:59:15.376306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.244 [2024-11-26 18:59:15.376387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.244 [2024-11-26 18:59:15.376520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.244 [2024-11-26 18:59:15.376554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.244 pt2 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.244 18:59:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.244 [2024-11-26 18:59:15.383501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.244 [2024-11-26 18:59:15.383729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.244 [2024-11-26 18:59:15.383768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:24.244 [2024-11-26 18:59:15.383784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.244 [2024-11-26 18:59:15.384287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.244 [2024-11-26 18:59:15.384321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.244 [2024-11-26 18:59:15.384414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:24.244 [2024-11-26 18:59:15.384444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.244 pt3 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.244 [2024-11-26 18:59:15.391476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:24.244 [2024-11-26 
18:59:15.391532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.244 [2024-11-26 18:59:15.391559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:24.244 [2024-11-26 18:59:15.391573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.244 [2024-11-26 18:59:15.392088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.244 [2024-11-26 18:59:15.392126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:24.244 [2024-11-26 18:59:15.392215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:24.244 [2024-11-26 18:59:15.392251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:24.244 [2024-11-26 18:59:15.392437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:24.244 [2024-11-26 18:59:15.392452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.244 [2024-11-26 18:59:15.392773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:24.244 [2024-11-26 18:59:15.393001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:24.244 [2024-11-26 18:59:15.393023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:24.244 [2024-11-26 18:59:15.393190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.244 pt4 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.244 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.244 "name": "raid_bdev1", 00:12:24.244 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:24.244 "strip_size_kb": 0, 00:12:24.244 "state": "online", 00:12:24.244 "raid_level": "raid1", 00:12:24.244 "superblock": true, 00:12:24.244 "num_base_bdevs": 4, 00:12:24.244 
"num_base_bdevs_discovered": 4, 00:12:24.244 "num_base_bdevs_operational": 4, 00:12:24.244 "base_bdevs_list": [ 00:12:24.244 { 00:12:24.244 "name": "pt1", 00:12:24.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.244 "is_configured": true, 00:12:24.244 "data_offset": 2048, 00:12:24.244 "data_size": 63488 00:12:24.244 }, 00:12:24.244 { 00:12:24.244 "name": "pt2", 00:12:24.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.245 "is_configured": true, 00:12:24.245 "data_offset": 2048, 00:12:24.245 "data_size": 63488 00:12:24.245 }, 00:12:24.245 { 00:12:24.245 "name": "pt3", 00:12:24.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.245 "is_configured": true, 00:12:24.245 "data_offset": 2048, 00:12:24.245 "data_size": 63488 00:12:24.245 }, 00:12:24.245 { 00:12:24.245 "name": "pt4", 00:12:24.245 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.245 "is_configured": true, 00:12:24.245 "data_offset": 2048, 00:12:24.245 "data_size": 63488 00:12:24.245 } 00:12:24.245 ] 00:12:24.245 }' 00:12:24.245 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.245 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.813 [2024-11-26 18:59:15.880125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.813 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:24.813 "name": "raid_bdev1", 00:12:24.813 "aliases": [ 00:12:24.813 "e0bc8e62-0976-4943-b157-ab144430ef3b" 00:12:24.813 ], 00:12:24.813 "product_name": "Raid Volume", 00:12:24.813 "block_size": 512, 00:12:24.813 "num_blocks": 63488, 00:12:24.813 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:24.813 "assigned_rate_limits": { 00:12:24.813 "rw_ios_per_sec": 0, 00:12:24.813 "rw_mbytes_per_sec": 0, 00:12:24.813 "r_mbytes_per_sec": 0, 00:12:24.813 "w_mbytes_per_sec": 0 00:12:24.813 }, 00:12:24.813 "claimed": false, 00:12:24.813 "zoned": false, 00:12:24.813 "supported_io_types": { 00:12:24.813 "read": true, 00:12:24.813 "write": true, 00:12:24.813 "unmap": false, 00:12:24.813 "flush": false, 00:12:24.813 "reset": true, 00:12:24.813 "nvme_admin": false, 00:12:24.813 "nvme_io": false, 00:12:24.813 "nvme_io_md": false, 00:12:24.813 "write_zeroes": true, 00:12:24.813 "zcopy": false, 00:12:24.813 "get_zone_info": false, 00:12:24.813 "zone_management": false, 00:12:24.813 "zone_append": false, 00:12:24.813 "compare": false, 00:12:24.813 "compare_and_write": false, 00:12:24.813 "abort": false, 00:12:24.813 "seek_hole": false, 00:12:24.813 "seek_data": false, 00:12:24.813 "copy": false, 00:12:24.814 "nvme_iov_md": false 00:12:24.814 }, 00:12:24.814 "memory_domains": [ 00:12:24.814 { 00:12:24.814 "dma_device_id": "system", 00:12:24.814 
"dma_device_type": 1 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.814 "dma_device_type": 2 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "system", 00:12:24.814 "dma_device_type": 1 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.814 "dma_device_type": 2 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "system", 00:12:24.814 "dma_device_type": 1 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.814 "dma_device_type": 2 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "system", 00:12:24.814 "dma_device_type": 1 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.814 "dma_device_type": 2 00:12:24.814 } 00:12:24.814 ], 00:12:24.814 "driver_specific": { 00:12:24.814 "raid": { 00:12:24.814 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:24.814 "strip_size_kb": 0, 00:12:24.814 "state": "online", 00:12:24.814 "raid_level": "raid1", 00:12:24.814 "superblock": true, 00:12:24.814 "num_base_bdevs": 4, 00:12:24.814 "num_base_bdevs_discovered": 4, 00:12:24.814 "num_base_bdevs_operational": 4, 00:12:24.814 "base_bdevs_list": [ 00:12:24.814 { 00:12:24.814 "name": "pt1", 00:12:24.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.814 "is_configured": true, 00:12:24.814 "data_offset": 2048, 00:12:24.814 "data_size": 63488 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "name": "pt2", 00:12:24.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.814 "is_configured": true, 00:12:24.814 "data_offset": 2048, 00:12:24.814 "data_size": 63488 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "name": "pt3", 00:12:24.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.814 "is_configured": true, 00:12:24.814 "data_offset": 2048, 00:12:24.814 "data_size": 63488 00:12:24.814 }, 00:12:24.814 { 00:12:24.814 "name": "pt4", 00:12:24.814 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:24.814 "is_configured": true, 00:12:24.814 "data_offset": 2048, 00:12:24.814 "data_size": 63488 00:12:24.814 } 00:12:24.814 ] 00:12:24.814 } 00:12:24.814 } 00:12:24.814 }' 00:12:24.814 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:24.814 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:24.814 pt2 00:12:24.814 pt3 00:12:24.814 pt4' 00:12:24.814 18:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.814 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:25.074 [2024-11-26 18:59:16.260169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e0bc8e62-0976-4943-b157-ab144430ef3b '!=' e0bc8e62-0976-4943-b157-ab144430ef3b ']' 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 [2024-11-26 18:59:16.299846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:25.074 18:59:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.074 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.075 "name": "raid_bdev1", 00:12:25.075 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:25.075 "strip_size_kb": 0, 00:12:25.075 "state": "online", 
00:12:25.075 "raid_level": "raid1", 00:12:25.075 "superblock": true, 00:12:25.075 "num_base_bdevs": 4, 00:12:25.075 "num_base_bdevs_discovered": 3, 00:12:25.075 "num_base_bdevs_operational": 3, 00:12:25.075 "base_bdevs_list": [ 00:12:25.075 { 00:12:25.075 "name": null, 00:12:25.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.075 "is_configured": false, 00:12:25.075 "data_offset": 0, 00:12:25.075 "data_size": 63488 00:12:25.075 }, 00:12:25.075 { 00:12:25.075 "name": "pt2", 00:12:25.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.075 "is_configured": true, 00:12:25.075 "data_offset": 2048, 00:12:25.075 "data_size": 63488 00:12:25.075 }, 00:12:25.075 { 00:12:25.075 "name": "pt3", 00:12:25.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.075 "is_configured": true, 00:12:25.075 "data_offset": 2048, 00:12:25.075 "data_size": 63488 00:12:25.075 }, 00:12:25.075 { 00:12:25.075 "name": "pt4", 00:12:25.075 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.075 "is_configured": true, 00:12:25.075 "data_offset": 2048, 00:12:25.075 "data_size": 63488 00:12:25.075 } 00:12:25.075 ] 00:12:25.075 }' 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.075 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.644 [2024-11-26 18:59:16.855940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.644 [2024-11-26 18:59:16.855980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.644 [2024-11-26 18:59:16.856098] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:25.644 [2024-11-26 18:59:16.856207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.644 [2024-11-26 18:59:16.856223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.644 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:25.645 
18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 [2024-11-26 18:59:16.935936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.645 [2024-11-26 18:59:16.936142] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.645 [2024-11-26 18:59:16.936184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:25.645 [2024-11-26 18:59:16.936207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.645 [2024-11-26 18:59:16.939172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.645 [2024-11-26 18:59:16.939343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.645 [2024-11-26 18:59:16.939489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:25.645 [2024-11-26 18:59:16.939555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.645 pt2 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.645 "name": "raid_bdev1", 00:12:25.645 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:25.645 "strip_size_kb": 0, 00:12:25.645 "state": "configuring", 00:12:25.645 "raid_level": "raid1", 00:12:25.645 "superblock": true, 00:12:25.645 "num_base_bdevs": 4, 00:12:25.645 "num_base_bdevs_discovered": 1, 00:12:25.645 "num_base_bdevs_operational": 3, 00:12:25.645 "base_bdevs_list": [ 00:12:25.645 { 00:12:25.645 "name": null, 00:12:25.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.645 "is_configured": false, 00:12:25.645 "data_offset": 2048, 00:12:25.645 "data_size": 63488 00:12:25.645 }, 00:12:25.645 { 00:12:25.645 "name": "pt2", 00:12:25.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.645 "is_configured": true, 00:12:25.645 "data_offset": 2048, 00:12:25.645 "data_size": 63488 00:12:25.645 }, 00:12:25.645 { 00:12:25.645 "name": null, 00:12:25.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.645 "is_configured": false, 00:12:25.645 "data_offset": 2048, 00:12:25.645 "data_size": 63488 00:12:25.645 }, 00:12:25.645 { 00:12:25.645 "name": null, 00:12:25.645 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.645 "is_configured": false, 00:12:25.645 "data_offset": 2048, 00:12:25.645 "data_size": 63488 00:12:25.645 } 00:12:25.645 ] 00:12:25.645 }' 
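The `verify_raid_bdev_state` helper seen above fetches the raid bdev dump with `bdev_raid_get_bdevs all`, selects the entry via `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares its fields against the expected state. A minimal Python sketch of that check, using the `raid_bdev1` JSON dumped in the log (trimmed to the fields the helper actually compares; this paraphrases the shell helper, it is not the actual test script):

```python
import json

# raid_bdev_info as dumped by `bdev_raid_get_bdevs` in the log above:
# state "configuring" after pt1 was deleted, with only pt2 re-attached,
# so 1 base bdev discovered out of 3 expected operational.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null,  "is_configured": false},
    {"name": "pt2", "is_configured": true},
    {"name": null,  "is_configured": false},
    {"name": null,  "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, num_operational):
    """Rough equivalent of the shell verify_raid_bdev_state helper:
    compare the dumped raid bdev fields against the expected values."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == num_operational
    # the discovered count must agree with the configured entries in the list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 3)
```

The shell version does the same comparisons with `[[ ... == ... ]]` tests on jq-extracted fields, which is why each check in the log appears as a separate xtrace line.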
00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.645 18:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.260 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:26.260 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.261 [2024-11-26 18:59:17.480161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.261 [2024-11-26 18:59:17.480246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.261 [2024-11-26 18:59:17.480283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:26.261 [2024-11-26 18:59:17.480299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.261 [2024-11-26 18:59:17.480917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.261 [2024-11-26 18:59:17.480949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.261 [2024-11-26 18:59:17.481073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:26.261 [2024-11-26 18:59:17.481107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.261 pt3 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.261 "name": "raid_bdev1", 00:12:26.261 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:26.261 "strip_size_kb": 0, 00:12:26.261 "state": "configuring", 00:12:26.261 "raid_level": "raid1", 00:12:26.261 "superblock": true, 00:12:26.261 "num_base_bdevs": 4, 00:12:26.261 "num_base_bdevs_discovered": 2, 00:12:26.261 "num_base_bdevs_operational": 3, 00:12:26.261 
"base_bdevs_list": [ 00:12:26.261 { 00:12:26.261 "name": null, 00:12:26.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.261 "is_configured": false, 00:12:26.261 "data_offset": 2048, 00:12:26.261 "data_size": 63488 00:12:26.261 }, 00:12:26.261 { 00:12:26.261 "name": "pt2", 00:12:26.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.261 "is_configured": true, 00:12:26.261 "data_offset": 2048, 00:12:26.261 "data_size": 63488 00:12:26.261 }, 00:12:26.261 { 00:12:26.261 "name": "pt3", 00:12:26.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.261 "is_configured": true, 00:12:26.261 "data_offset": 2048, 00:12:26.261 "data_size": 63488 00:12:26.261 }, 00:12:26.261 { 00:12:26.261 "name": null, 00:12:26.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.261 "is_configured": false, 00:12:26.261 "data_offset": 2048, 00:12:26.261 "data_size": 63488 00:12:26.261 } 00:12:26.261 ] 00:12:26.261 }' 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.261 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.854 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:26.854 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:26.854 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:26.854 18:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.854 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.854 18:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.854 [2024-11-26 18:59:17.996353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.854 [2024-11-26 18:59:17.996446] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.854 [2024-11-26 18:59:17.996488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:26.854 [2024-11-26 18:59:17.996504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.854 [2024-11-26 18:59:17.997114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.854 [2024-11-26 18:59:17.997140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.854 [2024-11-26 18:59:17.997254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:26.854 [2024-11-26 18:59:17.997293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.854 [2024-11-26 18:59:17.997465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:26.854 [2024-11-26 18:59:17.997487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.854 [2024-11-26 18:59:17.997799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:26.854 [2024-11-26 18:59:17.998021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:26.854 [2024-11-26 18:59:17.998045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:26.854 [2024-11-26 18:59:17.998216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.854 pt4 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.854 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.855 "name": "raid_bdev1", 00:12:26.855 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:26.855 "strip_size_kb": 0, 00:12:26.855 "state": "online", 00:12:26.855 "raid_level": "raid1", 00:12:26.855 "superblock": true, 00:12:26.855 "num_base_bdevs": 4, 00:12:26.855 "num_base_bdevs_discovered": 3, 00:12:26.855 "num_base_bdevs_operational": 3, 00:12:26.855 "base_bdevs_list": [ 00:12:26.855 { 00:12:26.855 "name": null, 00:12:26.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.855 "is_configured": false, 00:12:26.855 
"data_offset": 2048, 00:12:26.855 "data_size": 63488 00:12:26.855 }, 00:12:26.855 { 00:12:26.855 "name": "pt2", 00:12:26.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.855 "is_configured": true, 00:12:26.855 "data_offset": 2048, 00:12:26.855 "data_size": 63488 00:12:26.855 }, 00:12:26.855 { 00:12:26.855 "name": "pt3", 00:12:26.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.855 "is_configured": true, 00:12:26.855 "data_offset": 2048, 00:12:26.855 "data_size": 63488 00:12:26.855 }, 00:12:26.855 { 00:12:26.855 "name": "pt4", 00:12:26.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.855 "is_configured": true, 00:12:26.855 "data_offset": 2048, 00:12:26.855 "data_size": 63488 00:12:26.855 } 00:12:26.855 ] 00:12:26.855 }' 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.855 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.422 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.422 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.423 [2024-11-26 18:59:18.532416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.423 [2024-11-26 18:59:18.532586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.423 [2024-11-26 18:59:18.532716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.423 [2024-11-26 18:59:18.532820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.423 [2024-11-26 18:59:18.532841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:27.423 18:59:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.423 [2024-11-26 18:59:18.604427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.423 [2024-11-26 18:59:18.604512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:27.423 [2024-11-26 18:59:18.604542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:27.423 [2024-11-26 18:59:18.604563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.423 [2024-11-26 18:59:18.607541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.423 [2024-11-26 18:59:18.607592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:27.423 [2024-11-26 18:59:18.607707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:27.423 [2024-11-26 18:59:18.607774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:27.423 [2024-11-26 18:59:18.607966] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:27.423 [2024-11-26 18:59:18.607998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.423 [2024-11-26 18:59:18.608020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:27.423 [2024-11-26 18:59:18.608098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.423 [2024-11-26 18:59:18.608255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.423 pt1 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.423 "name": "raid_bdev1", 00:12:27.423 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:27.423 "strip_size_kb": 0, 00:12:27.423 "state": "configuring", 00:12:27.423 "raid_level": "raid1", 00:12:27.423 "superblock": true, 00:12:27.423 "num_base_bdevs": 4, 00:12:27.423 "num_base_bdevs_discovered": 2, 00:12:27.423 "num_base_bdevs_operational": 3, 00:12:27.423 "base_bdevs_list": [ 00:12:27.423 { 00:12:27.423 "name": null, 00:12:27.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.423 "is_configured": false, 00:12:27.423 "data_offset": 2048, 00:12:27.423 
"data_size": 63488 00:12:27.423 }, 00:12:27.423 { 00:12:27.423 "name": "pt2", 00:12:27.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.423 "is_configured": true, 00:12:27.423 "data_offset": 2048, 00:12:27.423 "data_size": 63488 00:12:27.423 }, 00:12:27.423 { 00:12:27.423 "name": "pt3", 00:12:27.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.423 "is_configured": true, 00:12:27.423 "data_offset": 2048, 00:12:27.423 "data_size": 63488 00:12:27.423 }, 00:12:27.423 { 00:12:27.423 "name": null, 00:12:27.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.423 "is_configured": false, 00:12:27.423 "data_offset": 2048, 00:12:27.423 "data_size": 63488 00:12:27.423 } 00:12:27.423 ] 00:12:27.423 }' 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.423 18:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.990 [2024-11-26 
18:59:19.168625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:27.990 [2024-11-26 18:59:19.168710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.990 [2024-11-26 18:59:19.168748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:27.990 [2024-11-26 18:59:19.168763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.990 [2024-11-26 18:59:19.169357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.990 [2024-11-26 18:59:19.169390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:27.990 [2024-11-26 18:59:19.169506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:27.990 [2024-11-26 18:59:19.169546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:27.990 [2024-11-26 18:59:19.169721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:27.990 [2024-11-26 18:59:19.169737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.990 [2024-11-26 18:59:19.170076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:27.990 [2024-11-26 18:59:19.170420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:27.990 [2024-11-26 18:59:19.170450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:27.990 [2024-11-26 18:59:19.170635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.990 pt4 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.990 18:59:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.990 "name": "raid_bdev1", 00:12:27.990 "uuid": "e0bc8e62-0976-4943-b157-ab144430ef3b", 00:12:27.990 "strip_size_kb": 0, 00:12:27.990 "state": "online", 00:12:27.990 "raid_level": "raid1", 00:12:27.990 "superblock": true, 00:12:27.990 "num_base_bdevs": 4, 00:12:27.990 "num_base_bdevs_discovered": 3, 00:12:27.990 "num_base_bdevs_operational": 3, 00:12:27.990 "base_bdevs_list": [ 00:12:27.990 { 
00:12:27.990 "name": null, 00:12:27.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.990 "is_configured": false, 00:12:27.990 "data_offset": 2048, 00:12:27.990 "data_size": 63488 00:12:27.990 }, 00:12:27.990 { 00:12:27.990 "name": "pt2", 00:12:27.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.990 "is_configured": true, 00:12:27.990 "data_offset": 2048, 00:12:27.990 "data_size": 63488 00:12:27.990 }, 00:12:27.990 { 00:12:27.990 "name": "pt3", 00:12:27.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.990 "is_configured": true, 00:12:27.990 "data_offset": 2048, 00:12:27.990 "data_size": 63488 00:12:27.990 }, 00:12:27.990 { 00:12:27.990 "name": "pt4", 00:12:27.990 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.990 "is_configured": true, 00:12:27.990 "data_offset": 2048, 00:12:27.990 "data_size": 63488 00:12:27.990 } 00:12:27.990 ] 00:12:27.990 }' 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.990 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.556 
18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.556 [2024-11-26 18:59:19.729151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e0bc8e62-0976-4943-b157-ab144430ef3b '!=' e0bc8e62-0976-4943-b157-ab144430ef3b ']' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74735 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74735 ']' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74735 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74735 00:12:28.556 killing process with pid 74735 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74735' 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74735 00:12:28.556 [2024-11-26 18:59:19.797754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.556 18:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74735 00:12:28.556 [2024-11-26 18:59:19.797875] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.556 [2024-11-26 18:59:19.797992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.556 [2024-11-26 18:59:19.798015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:28.815 [2024-11-26 18:59:20.156208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.193 18:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:30.193 00:12:30.193 real 0m9.404s 00:12:30.193 user 0m15.391s 00:12:30.193 sys 0m1.400s 00:12:30.193 18:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.193 ************************************ 00:12:30.193 END TEST raid_superblock_test 00:12:30.193 ************************************ 00:12:30.193 18:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 18:59:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:30.193 18:59:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.193 18:59:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.193 18:59:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 ************************************ 00:12:30.193 START TEST raid_read_error_test 00:12:30.193 ************************************ 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:30.193 18:59:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IJ3SVk6DtG 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75228 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75228 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75228 ']' 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.193 18:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 [2024-11-26 18:59:21.381392] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:12:30.193 [2024-11-26 18:59:21.381746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75228 ] 00:12:30.193 [2024-11-26 18:59:21.555538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.452 [2024-11-26 18:59:21.688987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.710 [2024-11-26 18:59:21.895085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.710 [2024-11-26 18:59:21.895361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 BaseBdev1_malloc 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 true 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 [2024-11-26 18:59:22.440811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:31.277 [2024-11-26 18:59:22.441038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.277 [2024-11-26 18:59:22.441092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:31.277 [2024-11-26 18:59:22.441113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.277 [2024-11-26 18:59:22.444185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.277 [2024-11-26 18:59:22.444358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:31.277 BaseBdev1 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.277 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 BaseBdev2_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 true 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 [2024-11-26 18:59:22.501834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:31.278 [2024-11-26 18:59:22.501932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.278 [2024-11-26 18:59:22.501971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:31.278 [2024-11-26 18:59:22.501989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.278 [2024-11-26 18:59:22.505038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.278 [2024-11-26 18:59:22.505089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:31.278 BaseBdev2 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 BaseBdev3_malloc 00:12:31.278 18:59:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 true 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 [2024-11-26 18:59:22.570705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:31.278 [2024-11-26 18:59:22.570774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.278 [2024-11-26 18:59:22.570805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:31.278 [2024-11-26 18:59:22.570823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.278 [2024-11-26 18:59:22.573870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.278 [2024-11-26 18:59:22.573938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:31.278 BaseBdev3 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 BaseBdev4_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 true 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 [2024-11-26 18:59:22.631130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:31.278 [2024-11-26 18:59:22.631203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.278 [2024-11-26 18:59:22.631235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:31.278 [2024-11-26 18:59:22.631254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.278 [2024-11-26 18:59:22.634261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.278 [2024-11-26 18:59:22.634315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:31.278 BaseBdev4 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.278 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.278 [2024-11-26 18:59:22.639219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.278 [2024-11-26 18:59:22.641769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.278 [2024-11-26 18:59:22.641882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.278 [2024-11-26 18:59:22.642012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.537 [2024-11-26 18:59:22.642333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:31.537 [2024-11-26 18:59:22.642366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.537 [2024-11-26 18:59:22.642720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:31.537 [2024-11-26 18:59:22.642971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:31.537 [2024-11-26 18:59:22.642994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:31.537 [2024-11-26 18:59:22.643261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:31.537 18:59:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.537 "name": "raid_bdev1", 00:12:31.537 "uuid": "9b3e9cdb-bf23-4cc5-95bc-efb3c1bfcaea", 00:12:31.537 "strip_size_kb": 0, 00:12:31.537 "state": "online", 00:12:31.537 "raid_level": "raid1", 00:12:31.537 "superblock": true, 00:12:31.537 "num_base_bdevs": 4, 00:12:31.537 "num_base_bdevs_discovered": 4, 00:12:31.537 "num_base_bdevs_operational": 4, 00:12:31.537 "base_bdevs_list": [ 00:12:31.537 { 
00:12:31.537 "name": "BaseBdev1", 00:12:31.537 "uuid": "e9951035-f03b-56cf-afea-8d9e30e40b65", 00:12:31.537 "is_configured": true, 00:12:31.537 "data_offset": 2048, 00:12:31.537 "data_size": 63488 00:12:31.537 }, 00:12:31.537 { 00:12:31.537 "name": "BaseBdev2", 00:12:31.537 "uuid": "5a805366-bc8c-5b27-b7f2-673dc9ec79ad", 00:12:31.537 "is_configured": true, 00:12:31.537 "data_offset": 2048, 00:12:31.537 "data_size": 63488 00:12:31.537 }, 00:12:31.537 { 00:12:31.537 "name": "BaseBdev3", 00:12:31.537 "uuid": "36191e96-62f5-5bdc-9914-b8b4cbf06d04", 00:12:31.537 "is_configured": true, 00:12:31.537 "data_offset": 2048, 00:12:31.537 "data_size": 63488 00:12:31.537 }, 00:12:31.537 { 00:12:31.537 "name": "BaseBdev4", 00:12:31.537 "uuid": "57f4f5e2-9467-59bf-a1df-95f40a570633", 00:12:31.537 "is_configured": true, 00:12:31.537 "data_offset": 2048, 00:12:31.537 "data_size": 63488 00:12:31.537 } 00:12:31.537 ] 00:12:31.537 }' 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.537 18:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.104 18:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:32.104 18:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:32.104 [2024-11-26 18:59:23.292811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.040 18:59:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.040 18:59:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.040 "name": "raid_bdev1", 00:12:33.040 "uuid": "9b3e9cdb-bf23-4cc5-95bc-efb3c1bfcaea", 00:12:33.040 "strip_size_kb": 0, 00:12:33.040 "state": "online", 00:12:33.040 "raid_level": "raid1", 00:12:33.040 "superblock": true, 00:12:33.040 "num_base_bdevs": 4, 00:12:33.040 "num_base_bdevs_discovered": 4, 00:12:33.040 "num_base_bdevs_operational": 4, 00:12:33.040 "base_bdevs_list": [ 00:12:33.040 { 00:12:33.040 "name": "BaseBdev1", 00:12:33.040 "uuid": "e9951035-f03b-56cf-afea-8d9e30e40b65", 00:12:33.040 "is_configured": true, 00:12:33.040 "data_offset": 2048, 00:12:33.040 "data_size": 63488 00:12:33.040 }, 00:12:33.040 { 00:12:33.040 "name": "BaseBdev2", 00:12:33.040 "uuid": "5a805366-bc8c-5b27-b7f2-673dc9ec79ad", 00:12:33.040 "is_configured": true, 00:12:33.040 "data_offset": 2048, 00:12:33.040 "data_size": 63488 00:12:33.040 }, 00:12:33.040 { 00:12:33.040 "name": "BaseBdev3", 00:12:33.040 "uuid": "36191e96-62f5-5bdc-9914-b8b4cbf06d04", 00:12:33.040 "is_configured": true, 00:12:33.040 "data_offset": 2048, 00:12:33.040 "data_size": 63488 00:12:33.040 }, 00:12:33.040 { 00:12:33.040 "name": "BaseBdev4", 00:12:33.040 "uuid": "57f4f5e2-9467-59bf-a1df-95f40a570633", 00:12:33.040 "is_configured": true, 00:12:33.040 "data_offset": 2048, 00:12:33.040 "data_size": 63488 00:12:33.040 } 00:12:33.040 ] 00:12:33.040 }' 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.040 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.609 [2024-11-26 18:59:24.708536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.609 [2024-11-26 18:59:24.708583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.609 [2024-11-26 18:59:24.711909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.609 [2024-11-26 18:59:24.711988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.609 [2024-11-26 18:59:24.712155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.609 [2024-11-26 18:59:24.712186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:33.609 { 00:12:33.609 "results": [ 00:12:33.609 { 00:12:33.609 "job": "raid_bdev1", 00:12:33.609 "core_mask": "0x1", 00:12:33.609 "workload": "randrw", 00:12:33.609 "percentage": 50, 00:12:33.609 "status": "finished", 00:12:33.609 "queue_depth": 1, 00:12:33.609 "io_size": 131072, 00:12:33.609 "runtime": 1.412131, 00:12:33.609 "iops": 7443.360424776455, 00:12:33.609 "mibps": 930.4200530970569, 00:12:33.609 "io_failed": 0, 00:12:33.609 "io_timeout": 0, 00:12:33.609 "avg_latency_us": 130.17581512009065, 00:12:33.609 "min_latency_us": 43.28727272727273, 00:12:33.609 "max_latency_us": 1817.1345454545456 00:12:33.609 } 00:12:33.609 ], 00:12:33.609 "core_count": 1 00:12:33.609 } 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75228 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75228 ']' 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75228 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75228 00:12:33.609 killing process with pid 75228 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75228' 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75228 00:12:33.609 18:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75228 00:12:33.609 [2024-11-26 18:59:24.747960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.868 [2024-11-26 18:59:25.045406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IJ3SVk6DtG 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:34.805 00:12:34.805 real 0m4.885s 00:12:34.805 user 0m6.031s 00:12:34.805 sys 0m0.584s 
00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.805 ************************************ 00:12:34.805 END TEST raid_read_error_test 00:12:34.805 ************************************ 00:12:34.805 18:59:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.064 18:59:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:35.064 18:59:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:35.064 18:59:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.064 18:59:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.064 ************************************ 00:12:35.064 START TEST raid_write_error_test 00:12:35.064 ************************************ 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.erUUVptdyA 00:12:35.064 18:59:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75379 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75379 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75379 ']' 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.064 18:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.064 [2024-11-26 18:59:26.310387] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:12:35.064 [2024-11-26 18:59:26.310553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75379 ] 00:12:35.322 [2024-11-26 18:59:26.483779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.322 [2024-11-26 18:59:26.612616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.581 [2024-11-26 18:59:26.816288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.581 [2024-11-26 18:59:26.816369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.148 BaseBdev1_malloc 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.148 true 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.148 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.148 [2024-11-26 18:59:27.410008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:36.148 [2024-11-26 18:59:27.410077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.148 [2024-11-26 18:59:27.410108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:36.149 [2024-11-26 18:59:27.410127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.149 [2024-11-26 18:59:27.413020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.149 [2024-11-26 18:59:27.413073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.149 BaseBdev1 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.149 BaseBdev2_malloc 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:36.149 18:59:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.149 true 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.149 [2024-11-26 18:59:27.478296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:36.149 [2024-11-26 18:59:27.478397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.149 [2024-11-26 18:59:27.478430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:36.149 [2024-11-26 18:59:27.478455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.149 [2024-11-26 18:59:27.481562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.149 [2024-11-26 18:59:27.481623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.149 BaseBdev2 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.149 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:36.408 BaseBdev3_malloc 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.408 true 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.408 [2024-11-26 18:59:27.561580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:36.408 [2024-11-26 18:59:27.561661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.408 [2024-11-26 18:59:27.561695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:36.408 [2024-11-26 18:59:27.561716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.408 [2024-11-26 18:59:27.564770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.408 [2024-11-26 18:59:27.564821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:36.408 BaseBdev3 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.408 BaseBdev4_malloc 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.408 true 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.408 [2024-11-26 18:59:27.629836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:36.408 [2024-11-26 18:59:27.629923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.408 [2024-11-26 18:59:27.629958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.408 [2024-11-26 18:59:27.629977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.408 [2024-11-26 18:59:27.632891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.408 [2024-11-26 18:59:27.632957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:36.408 BaseBdev4 
00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.408 [2024-11-26 18:59:27.641973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.408 [2024-11-26 18:59:27.644505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.408 [2024-11-26 18:59:27.644619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.408 [2024-11-26 18:59:27.644718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.408 [2024-11-26 18:59:27.645054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:36.408 [2024-11-26 18:59:27.645081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.408 [2024-11-26 18:59:27.645427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:36.408 [2024-11-26 18:59:27.645656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:36.408 [2024-11-26 18:59:27.645673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:36.408 [2024-11-26 18:59:27.645950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.408 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.409 "name": "raid_bdev1", 00:12:36.409 "uuid": "6a7181ee-a22d-4556-a1b7-149284c5ec86", 00:12:36.409 "strip_size_kb": 0, 00:12:36.409 "state": "online", 00:12:36.409 "raid_level": "raid1", 00:12:36.409 "superblock": true, 00:12:36.409 "num_base_bdevs": 4, 00:12:36.409 "num_base_bdevs_discovered": 4, 00:12:36.409 
"num_base_bdevs_operational": 4, 00:12:36.409 "base_bdevs_list": [ 00:12:36.409 { 00:12:36.409 "name": "BaseBdev1", 00:12:36.409 "uuid": "2f13b9de-9bc7-5b08-bd4e-b783f7bf56db", 00:12:36.409 "is_configured": true, 00:12:36.409 "data_offset": 2048, 00:12:36.409 "data_size": 63488 00:12:36.409 }, 00:12:36.409 { 00:12:36.409 "name": "BaseBdev2", 00:12:36.409 "uuid": "72a4659c-db3d-51b3-aac4-982fd618e8cb", 00:12:36.409 "is_configured": true, 00:12:36.409 "data_offset": 2048, 00:12:36.409 "data_size": 63488 00:12:36.409 }, 00:12:36.409 { 00:12:36.409 "name": "BaseBdev3", 00:12:36.409 "uuid": "cf77344c-51ee-54c4-b4d2-755045756b5e", 00:12:36.409 "is_configured": true, 00:12:36.409 "data_offset": 2048, 00:12:36.409 "data_size": 63488 00:12:36.409 }, 00:12:36.409 { 00:12:36.409 "name": "BaseBdev4", 00:12:36.409 "uuid": "3a156d28-2216-594b-8deb-32c429aa242f", 00:12:36.409 "is_configured": true, 00:12:36.409 "data_offset": 2048, 00:12:36.409 "data_size": 63488 00:12:36.409 } 00:12:36.409 ] 00:12:36.409 }' 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.409 18:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.995 18:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:36.995 18:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.995 [2024-11-26 18:59:28.343730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.956 [2024-11-26 18:59:29.189476] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:37.956 [2024-11-26 18:59:29.189548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.956 [2024-11-26 18:59:29.189832] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.956 "name": "raid_bdev1", 00:12:37.956 "uuid": "6a7181ee-a22d-4556-a1b7-149284c5ec86", 00:12:37.956 "strip_size_kb": 0, 00:12:37.956 "state": "online", 00:12:37.956 "raid_level": "raid1", 00:12:37.956 "superblock": true, 00:12:37.956 "num_base_bdevs": 4, 00:12:37.956 "num_base_bdevs_discovered": 3, 00:12:37.956 "num_base_bdevs_operational": 3, 00:12:37.956 "base_bdevs_list": [ 00:12:37.956 { 00:12:37.956 "name": null, 00:12:37.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.956 "is_configured": false, 00:12:37.956 "data_offset": 0, 00:12:37.956 "data_size": 63488 00:12:37.956 }, 00:12:37.956 { 00:12:37.956 "name": "BaseBdev2", 00:12:37.956 "uuid": "72a4659c-db3d-51b3-aac4-982fd618e8cb", 00:12:37.956 "is_configured": true, 00:12:37.956 "data_offset": 2048, 00:12:37.956 "data_size": 63488 00:12:37.956 }, 00:12:37.956 { 00:12:37.956 "name": "BaseBdev3", 00:12:37.956 "uuid": "cf77344c-51ee-54c4-b4d2-755045756b5e", 00:12:37.956 "is_configured": true, 00:12:37.956 "data_offset": 2048, 00:12:37.956 "data_size": 63488 00:12:37.956 }, 00:12:37.956 { 00:12:37.956 "name": "BaseBdev4", 00:12:37.956 "uuid": "3a156d28-2216-594b-8deb-32c429aa242f", 00:12:37.956 "is_configured": true, 00:12:37.956 "data_offset": 2048, 00:12:37.956 "data_size": 63488 00:12:37.956 } 00:12:37.956 ] 
00:12:37.956 }' 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.956 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.526 [2024-11-26 18:59:29.702821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.526 [2024-11-26 18:59:29.702863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.526 [2024-11-26 18:59:29.706276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.526 [2024-11-26 18:59:29.706347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.526 [2024-11-26 18:59:29.706489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.526 [2024-11-26 18:59:29.706505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:38.526 { 00:12:38.526 "results": [ 00:12:38.526 { 00:12:38.526 "job": "raid_bdev1", 00:12:38.526 "core_mask": "0x1", 00:12:38.526 "workload": "randrw", 00:12:38.526 "percentage": 50, 00:12:38.526 "status": "finished", 00:12:38.526 "queue_depth": 1, 00:12:38.526 "io_size": 131072, 00:12:38.526 "runtime": 1.356516, 00:12:38.526 "iops": 7754.42383281878, 00:12:38.526 "mibps": 969.3029791023475, 00:12:38.526 "io_failed": 0, 00:12:38.526 "io_timeout": 0, 00:12:38.526 "avg_latency_us": 124.63907215514784, 00:12:38.526 "min_latency_us": 43.52, 00:12:38.526 "max_latency_us": 2055.447272727273 00:12:38.526 } 00:12:38.526 ], 00:12:38.526 "core_count": 1 00:12:38.526 } 
00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75379 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75379 ']' 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75379 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75379 00:12:38.526 killing process with pid 75379 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75379' 00:12:38.526 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75379 00:12:38.527 18:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75379 00:12:38.527 [2024-11-26 18:59:29.744906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.785 [2024-11-26 18:59:30.044794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.erUUVptdyA 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:40.163 00:12:40.163 real 0m4.973s 00:12:40.163 user 0m6.179s 00:12:40.163 sys 0m0.611s 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.163 18:59:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 ************************************ 00:12:40.163 END TEST raid_write_error_test 00:12:40.163 ************************************ 00:12:40.163 18:59:31 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:40.163 18:59:31 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:40.163 18:59:31 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:40.163 18:59:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:40.163 18:59:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.163 18:59:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 ************************************ 00:12:40.163 START TEST raid_rebuild_test 00:12:40.163 ************************************ 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:40.163 
18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75523 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75523 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75523 ']' 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.163 18:59:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.163 [2024-11-26 18:59:31.365756] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:12:40.163 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:40.163 Zero copy mechanism will not be used. 
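The bdevperf invocation above passes `-o 3M` (I/O size), and the startup notice explains why zero copy is disabled: 3 MiB exceeds the 65536-byte zero copy threshold. A small sanity check of the arithmetic in that notice, using only the figures printed in the log:

```python
# bdevperf was started with -o 3M; the log reports this as 3145728 bytes.
io_size = 3 * 1024 ** 2
assert io_size == 3145728

# Threshold quoted in the startup notice; zero copy is skipped when the
# I/O size is larger than this value.
ZERO_COPY_THRESHOLD = 65536
print(io_size > ZERO_COPY_THRESHOLD)   # True, hence "Zero copy mechanism will not be used."
```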
00:12:40.163 [2024-11-26 18:59:31.366342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75523 ] 00:12:40.422 [2024-11-26 18:59:31.567006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.422 [2024-11-26 18:59:31.731679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.680 [2024-11-26 18:59:31.943727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.680 [2024-11-26 18:59:31.943791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.938 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.938 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.938 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.938 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.938 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.939 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 BaseBdev1_malloc 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 [2024-11-26 18:59:32.349025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:41.197 
[2024-11-26 18:59:32.349128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.197 [2024-11-26 18:59:32.349165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.197 [2024-11-26 18:59:32.349191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.197 [2024-11-26 18:59:32.352358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.197 [2024-11-26 18:59:32.352412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.197 BaseBdev1 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 BaseBdev2_malloc 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 [2024-11-26 18:59:32.406710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:41.197 [2024-11-26 18:59:32.407791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.197 [2024-11-26 18:59:32.407844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:41.197 [2024-11-26 18:59:32.407868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.197 [2024-11-26 18:59:32.411089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.197 [2024-11-26 18:59:32.411150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.197 BaseBdev2 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 spare_malloc 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 spare_delay 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 [2024-11-26 18:59:32.493086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:41.197 [2024-11-26 18:59:32.493407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:41.197 [2024-11-26 18:59:32.493457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:41.197 [2024-11-26 18:59:32.493479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.197 [2024-11-26 18:59:32.496802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.197 [2024-11-26 18:59:32.497016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:41.197 spare 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.197 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 [2024-11-26 18:59:32.505367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.198 [2024-11-26 18:59:32.508061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.198 [2024-11-26 18:59:32.508363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:41.198 [2024-11-26 18:59:32.508395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:41.198 [2024-11-26 18:59:32.508782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:41.198 [2024-11-26 18:59:32.509060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:41.198 [2024-11-26 18:59:32.509090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:41.198 [2024-11-26 18:59:32.509391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.198 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.456 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.456 "name": "raid_bdev1", 00:12:41.456 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:41.456 "strip_size_kb": 0, 00:12:41.456 "state": "online", 00:12:41.456 
"raid_level": "raid1", 00:12:41.456 "superblock": false, 00:12:41.456 "num_base_bdevs": 2, 00:12:41.456 "num_base_bdevs_discovered": 2, 00:12:41.456 "num_base_bdevs_operational": 2, 00:12:41.456 "base_bdevs_list": [ 00:12:41.456 { 00:12:41.456 "name": "BaseBdev1", 00:12:41.456 "uuid": "fe7a0e43-925b-5e45-89a7-fe3fb727bcf7", 00:12:41.456 "is_configured": true, 00:12:41.456 "data_offset": 0, 00:12:41.456 "data_size": 65536 00:12:41.456 }, 00:12:41.456 { 00:12:41.456 "name": "BaseBdev2", 00:12:41.456 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:41.456 "is_configured": true, 00:12:41.456 "data_offset": 0, 00:12:41.456 "data_size": 65536 00:12:41.456 } 00:12:41.456 ] 00:12:41.456 }' 00:12:41.456 18:59:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.456 18:59:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.714 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.714 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.714 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:41.714 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.714 [2024-11-26 18:59:33.065938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.972 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:42.230 [2024-11-26 18:59:33.485813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:42.230 /dev/nbd0 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.230 1+0 records in 00:12:42.230 1+0 records out 00:12:42.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469203 s, 8.7 MB/s 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.230 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:42.231 18:59:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:50.340 65536+0 records in 00:12:50.340 65536+0 records out 00:12:50.340 33554432 bytes (34 MB, 32 MiB) copied, 6.93875 s, 4.8 MB/s 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.340 [2024-11-26 18:59:40.799019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.340 [2024-11-26 18:59:40.811168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.340 "name": "raid_bdev1", 00:12:50.340 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:50.340 "strip_size_kb": 0, 00:12:50.340 "state": "online", 00:12:50.340 "raid_level": "raid1", 00:12:50.340 "superblock": false, 00:12:50.340 "num_base_bdevs": 2, 00:12:50.340 "num_base_bdevs_discovered": 1, 00:12:50.340 "num_base_bdevs_operational": 1, 00:12:50.340 "base_bdevs_list": [ 00:12:50.340 { 00:12:50.340 "name": null, 00:12:50.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.340 "is_configured": false, 00:12:50.340 "data_offset": 0, 00:12:50.340 "data_size": 65536 00:12:50.340 }, 00:12:50.340 { 00:12:50.340 "name": "BaseBdev2", 00:12:50.340 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:50.340 "is_configured": true, 00:12:50.340 "data_offset": 0, 00:12:50.340 "data_size": 65536 00:12:50.340 } 00:12:50.340 ] 00:12:50.340 }' 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.340 18:59:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.340 18:59:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.340 18:59:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.340 18:59:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.340 [2024-11-26 18:59:41.359314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.340 [2024-11-26 18:59:41.376092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:50.340 18:59:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.340 18:59:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:50.340 [2024-11-26 18:59:41.378744] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.278 "name": "raid_bdev1", 00:12:51.278 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:51.278 "strip_size_kb": 0, 00:12:51.278 "state": "online", 00:12:51.278 "raid_level": "raid1", 00:12:51.278 "superblock": false, 00:12:51.278 "num_base_bdevs": 2, 00:12:51.278 "num_base_bdevs_discovered": 2, 00:12:51.278 "num_base_bdevs_operational": 2, 00:12:51.278 "process": { 00:12:51.278 "type": "rebuild", 00:12:51.278 "target": "spare", 00:12:51.278 "progress": { 00:12:51.278 "blocks": 20480, 
00:12:51.278 "percent": 31 00:12:51.278 } 00:12:51.278 }, 00:12:51.278 "base_bdevs_list": [ 00:12:51.278 { 00:12:51.278 "name": "spare", 00:12:51.278 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:51.278 "is_configured": true, 00:12:51.278 "data_offset": 0, 00:12:51.278 "data_size": 65536 00:12:51.278 }, 00:12:51.278 { 00:12:51.278 "name": "BaseBdev2", 00:12:51.278 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:51.278 "is_configured": true, 00:12:51.278 "data_offset": 0, 00:12:51.278 "data_size": 65536 00:12:51.278 } 00:12:51.278 ] 00:12:51.278 }' 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.278 [2024-11-26 18:59:42.548334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.278 [2024-11-26 18:59:42.588370] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:51.278 [2024-11-26 18:59:42.588486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.278 [2024-11-26 18:59:42.588512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.278 [2024-11-26 18:59:42.588529] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:51.278 18:59:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.278 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.537 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.537 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.537 "name": "raid_bdev1", 00:12:51.537 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:51.537 "strip_size_kb": 0, 00:12:51.537 "state": "online", 00:12:51.537 "raid_level": "raid1", 00:12:51.537 
"superblock": false, 00:12:51.537 "num_base_bdevs": 2, 00:12:51.537 "num_base_bdevs_discovered": 1, 00:12:51.537 "num_base_bdevs_operational": 1, 00:12:51.537 "base_bdevs_list": [ 00:12:51.537 { 00:12:51.537 "name": null, 00:12:51.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.537 "is_configured": false, 00:12:51.537 "data_offset": 0, 00:12:51.537 "data_size": 65536 00:12:51.537 }, 00:12:51.537 { 00:12:51.537 "name": "BaseBdev2", 00:12:51.537 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:51.537 "is_configured": true, 00:12:51.537 "data_offset": 0, 00:12:51.537 "data_size": 65536 00:12:51.537 } 00:12:51.537 ] 00:12:51.537 }' 00:12:51.537 18:59:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.537 18:59:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.797 18:59:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:52.057 "name": "raid_bdev1", 00:12:52.057 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:52.057 "strip_size_kb": 0, 00:12:52.057 "state": "online", 00:12:52.057 "raid_level": "raid1", 00:12:52.057 "superblock": false, 00:12:52.057 "num_base_bdevs": 2, 00:12:52.057 "num_base_bdevs_discovered": 1, 00:12:52.057 "num_base_bdevs_operational": 1, 00:12:52.057 "base_bdevs_list": [ 00:12:52.057 { 00:12:52.057 "name": null, 00:12:52.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.057 "is_configured": false, 00:12:52.057 "data_offset": 0, 00:12:52.057 "data_size": 65536 00:12:52.057 }, 00:12:52.057 { 00:12:52.057 "name": "BaseBdev2", 00:12:52.057 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:52.057 "is_configured": true, 00:12:52.057 "data_offset": 0, 00:12:52.057 "data_size": 65536 00:12:52.057 } 00:12:52.057 ] 00:12:52.057 }' 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 [2024-11-26 18:59:43.301215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.057 [2024-11-26 18:59:43.317365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:52.057 18:59:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 
18:59:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:52.057 [2024-11-26 18:59:43.319994] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.995 18:59:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.253 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.253 "name": "raid_bdev1", 00:12:53.253 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:53.253 "strip_size_kb": 0, 00:12:53.254 "state": "online", 00:12:53.254 "raid_level": "raid1", 00:12:53.254 "superblock": false, 00:12:53.254 "num_base_bdevs": 2, 00:12:53.254 "num_base_bdevs_discovered": 2, 00:12:53.254 "num_base_bdevs_operational": 2, 00:12:53.254 "process": { 00:12:53.254 "type": "rebuild", 00:12:53.254 "target": "spare", 00:12:53.254 "progress": { 00:12:53.254 "blocks": 20480, 00:12:53.254 "percent": 31 00:12:53.254 } 00:12:53.254 }, 00:12:53.254 "base_bdevs_list": [ 
00:12:53.254 { 00:12:53.254 "name": "spare", 00:12:53.254 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:53.254 "is_configured": true, 00:12:53.254 "data_offset": 0, 00:12:53.254 "data_size": 65536 00:12:53.254 }, 00:12:53.254 { 00:12:53.254 "name": "BaseBdev2", 00:12:53.254 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:53.254 "is_configured": true, 00:12:53.254 "data_offset": 0, 00:12:53.254 "data_size": 65536 00:12:53.254 } 00:12:53.254 ] 00:12:53.254 }' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.254 
18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.254 "name": "raid_bdev1", 00:12:53.254 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:53.254 "strip_size_kb": 0, 00:12:53.254 "state": "online", 00:12:53.254 "raid_level": "raid1", 00:12:53.254 "superblock": false, 00:12:53.254 "num_base_bdevs": 2, 00:12:53.254 "num_base_bdevs_discovered": 2, 00:12:53.254 "num_base_bdevs_operational": 2, 00:12:53.254 "process": { 00:12:53.254 "type": "rebuild", 00:12:53.254 "target": "spare", 00:12:53.254 "progress": { 00:12:53.254 "blocks": 22528, 00:12:53.254 "percent": 34 00:12:53.254 } 00:12:53.254 }, 00:12:53.254 "base_bdevs_list": [ 00:12:53.254 { 00:12:53.254 "name": "spare", 00:12:53.254 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:53.254 "is_configured": true, 00:12:53.254 "data_offset": 0, 00:12:53.254 "data_size": 65536 00:12:53.254 }, 00:12:53.254 { 00:12:53.254 "name": "BaseBdev2", 00:12:53.254 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:53.254 "is_configured": true, 00:12:53.254 "data_offset": 0, 00:12:53.254 "data_size": 65536 00:12:53.254 } 00:12:53.254 ] 00:12:53.254 }' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:53.254 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.513 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.513 18:59:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.449 "name": "raid_bdev1", 00:12:54.449 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:54.449 "strip_size_kb": 0, 00:12:54.449 "state": "online", 00:12:54.449 "raid_level": "raid1", 00:12:54.449 "superblock": false, 00:12:54.449 "num_base_bdevs": 2, 00:12:54.449 "num_base_bdevs_discovered": 2, 00:12:54.449 "num_base_bdevs_operational": 2, 00:12:54.449 "process": { 
00:12:54.449 "type": "rebuild", 00:12:54.449 "target": "spare", 00:12:54.449 "progress": { 00:12:54.449 "blocks": 47104, 00:12:54.449 "percent": 71 00:12:54.449 } 00:12:54.449 }, 00:12:54.449 "base_bdevs_list": [ 00:12:54.449 { 00:12:54.449 "name": "spare", 00:12:54.449 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:54.449 "is_configured": true, 00:12:54.449 "data_offset": 0, 00:12:54.449 "data_size": 65536 00:12:54.449 }, 00:12:54.449 { 00:12:54.449 "name": "BaseBdev2", 00:12:54.449 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:54.449 "is_configured": true, 00:12:54.449 "data_offset": 0, 00:12:54.449 "data_size": 65536 00:12:54.449 } 00:12:54.449 ] 00:12:54.449 }' 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.449 18:59:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.386 [2024-11-26 18:59:46.546032] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:55.386 [2024-11-26 18:59:46.546191] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:55.386 [2024-11-26 18:59:46.546258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.645 "name": "raid_bdev1", 00:12:55.645 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:55.645 "strip_size_kb": 0, 00:12:55.645 "state": "online", 00:12:55.645 "raid_level": "raid1", 00:12:55.645 "superblock": false, 00:12:55.645 "num_base_bdevs": 2, 00:12:55.645 "num_base_bdevs_discovered": 2, 00:12:55.645 "num_base_bdevs_operational": 2, 00:12:55.645 "base_bdevs_list": [ 00:12:55.645 { 00:12:55.645 "name": "spare", 00:12:55.645 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:55.645 "is_configured": true, 00:12:55.645 "data_offset": 0, 00:12:55.645 "data_size": 65536 00:12:55.645 }, 00:12:55.645 { 00:12:55.645 "name": "BaseBdev2", 00:12:55.645 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:55.645 "is_configured": true, 00:12:55.645 "data_offset": 0, 00:12:55.645 "data_size": 65536 00:12:55.645 } 00:12:55.645 ] 00:12:55.645 }' 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:55.645 18:59:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.645 18:59:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.904 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.904 "name": "raid_bdev1", 00:12:55.904 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:55.904 "strip_size_kb": 0, 00:12:55.904 "state": "online", 00:12:55.904 "raid_level": "raid1", 00:12:55.904 "superblock": false, 00:12:55.904 "num_base_bdevs": 2, 00:12:55.904 "num_base_bdevs_discovered": 2, 00:12:55.904 "num_base_bdevs_operational": 2, 00:12:55.904 "base_bdevs_list": [ 00:12:55.904 { 00:12:55.904 "name": "spare", 00:12:55.904 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:55.904 "is_configured": true, 
00:12:55.904 "data_offset": 0, 00:12:55.904 "data_size": 65536 00:12:55.904 }, 00:12:55.904 { 00:12:55.904 "name": "BaseBdev2", 00:12:55.904 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:55.904 "is_configured": true, 00:12:55.904 "data_offset": 0, 00:12:55.904 "data_size": 65536 00:12:55.905 } 00:12:55.905 ] 00:12:55.905 }' 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.905 18:59:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.905 "name": "raid_bdev1", 00:12:55.905 "uuid": "32b23ef2-40e1-4603-a054-a91474914448", 00:12:55.905 "strip_size_kb": 0, 00:12:55.905 "state": "online", 00:12:55.905 "raid_level": "raid1", 00:12:55.905 "superblock": false, 00:12:55.905 "num_base_bdevs": 2, 00:12:55.905 "num_base_bdevs_discovered": 2, 00:12:55.905 "num_base_bdevs_operational": 2, 00:12:55.905 "base_bdevs_list": [ 00:12:55.905 { 00:12:55.905 "name": "spare", 00:12:55.905 "uuid": "2be90fd7-7ac2-5692-ab1d-396f437269a2", 00:12:55.905 "is_configured": true, 00:12:55.905 "data_offset": 0, 00:12:55.905 "data_size": 65536 00:12:55.905 }, 00:12:55.905 { 00:12:55.905 "name": "BaseBdev2", 00:12:55.905 "uuid": "b2b59987-c65d-5d8f-b4f7-d7a5a0c7f400", 00:12:55.905 "is_configured": true, 00:12:55.905 "data_offset": 0, 00:12:55.905 "data_size": 65536 00:12:55.905 } 00:12:55.905 ] 00:12:55.905 }' 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.905 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.472 [2024-11-26 18:59:47.660427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.472 [2024-11-26 
18:59:47.660473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.472 [2024-11-26 18:59:47.660608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.472 [2024-11-26 18:59:47.660708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.472 [2024-11-26 18:59:47.660727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.472 18:59:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:56.731 /dev/nbd0 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.731 1+0 records in 00:12:56.731 1+0 records out 00:12:56.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036665 s, 11.2 MB/s 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.731 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:57.297 /dev/nbd1 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.298 1+0 records in 00:12:57.298 1+0 records out 00:12:57.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496641 s, 8.2 MB/s 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.298 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:57.556 18:59:48 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.556 18:59:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75523 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75523 ']' 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75523 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.815 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75523 00:12:58.072 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.072 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.072 killing process with pid 75523 00:12:58.072 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75523' 00:12:58.072 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75523 00:12:58.072 Received shutdown signal, test time was about 60.000000 seconds 00:12:58.072 00:12:58.072 Latency(us) 00:12:58.072 [2024-11-26T18:59:49.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.072 [2024-11-26T18:59:49.439Z] =================================================================================================================== 00:12:58.072 [2024-11-26T18:59:49.439Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:58.072 [2024-11-26 18:59:49.201557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.072 18:59:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75523 00:12:58.330 [2024-11-26 18:59:49.480546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:59.267 00:12:59.267 real 0m19.322s 00:12:59.267 user 0m22.018s 00:12:59.267 sys 
0m3.844s 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.267 ************************************ 00:12:59.267 END TEST raid_rebuild_test 00:12:59.267 ************************************ 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.267 18:59:50 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:59.267 18:59:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:59.267 18:59:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.267 18:59:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.267 ************************************ 00:12:59.267 START TEST raid_rebuild_test_sb 00:12:59.267 ************************************ 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:59.267 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75980 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75980 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75980 ']' 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.526 18:59:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.526 [2024-11-26 18:59:50.745523] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:12:59.526 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.526 Zero copy mechanism will not be used. 00:12:59.526 [2024-11-26 18:59:50.745749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75980 ] 00:12:59.785 [2024-11-26 18:59:50.934642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.785 [2024-11-26 18:59:51.067647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.043 [2024-11-26 18:59:51.277368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.043 [2024-11-26 18:59:51.277427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.610 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.610 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:00.611 18:59:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 BaseBdev1_malloc 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 [2024-11-26 18:59:51.800170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:00.611 [2024-11-26 18:59:51.800259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.611 [2024-11-26 18:59:51.800295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:00.611 [2024-11-26 18:59:51.800316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.611 [2024-11-26 18:59:51.803243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.611 [2024-11-26 18:59:51.803295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.611 BaseBdev1 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 BaseBdev2_malloc 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 [2024-11-26 18:59:51.856337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:00.611 [2024-11-26 18:59:51.856422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.611 [2024-11-26 18:59:51.856457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:00.611 [2024-11-26 18:59:51.856478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.611 [2024-11-26 18:59:51.859357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.611 [2024-11-26 18:59:51.859405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.611 BaseBdev2 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 spare_malloc 00:13:00.611 18:59:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 spare_delay 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 [2024-11-26 18:59:51.923994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.611 [2024-11-26 18:59:51.924076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.611 [2024-11-26 18:59:51.924111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:00.611 [2024-11-26 18:59:51.924131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.611 [2024-11-26 18:59:51.927112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.611 [2024-11-26 18:59:51.927165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.611 spare 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 [2024-11-26 18:59:51.932126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.611 [2024-11-26 18:59:51.934595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.611 [2024-11-26 18:59:51.934842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:00.611 [2024-11-26 18:59:51.934866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.611 [2024-11-26 18:59:51.935235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:00.611 [2024-11-26 18:59:51.935469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:00.611 [2024-11-26 18:59:51.935485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:00.611 [2024-11-26 18:59:51.935704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.611 18:59:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.611 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.870 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.871 "name": "raid_bdev1", 00:13:00.871 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:00.871 "strip_size_kb": 0, 00:13:00.871 "state": "online", 00:13:00.871 "raid_level": "raid1", 00:13:00.871 "superblock": true, 00:13:00.871 "num_base_bdevs": 2, 00:13:00.871 "num_base_bdevs_discovered": 2, 00:13:00.871 "num_base_bdevs_operational": 2, 00:13:00.871 "base_bdevs_list": [ 00:13:00.871 { 00:13:00.871 "name": "BaseBdev1", 00:13:00.871 "uuid": "12275d16-fb2b-50cb-a33b-6edc05573671", 00:13:00.871 "is_configured": true, 00:13:00.871 "data_offset": 2048, 00:13:00.871 "data_size": 63488 00:13:00.871 }, 00:13:00.871 { 00:13:00.871 "name": "BaseBdev2", 00:13:00.871 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:00.871 "is_configured": true, 00:13:00.871 "data_offset": 2048, 00:13:00.871 "data_size": 63488 00:13:00.871 } 00:13:00.871 ] 00:13:00.871 }' 00:13:00.871 18:59:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:13:00.871 18:59:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.129 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.129 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.129 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:01.129 [2024-11-26 18:59:52.460629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.129 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:01.443 18:59:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:01.443 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:01.722 [2024-11-26 18:59:52.872463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:01.722 /dev/nbd0 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:01.722 18:59:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.722 1+0 records in 00:13:01.722 1+0 records out 00:13:01.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412098 s, 9.9 MB/s 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:01.722 18:59:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:08.341 63488+0 records in 00:13:08.341 63488+0 records out 00:13:08.341 32505856 bytes (33 MB, 31 MiB) copied, 6.20052 s, 5.2 MB/s 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:08.341 [2024-11-26 18:59:59.439865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.341 [2024-11-26 18:59:59.468003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.341 18:59:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.341 "name": "raid_bdev1", 00:13:08.341 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:08.341 "strip_size_kb": 0, 00:13:08.341 "state": "online", 
00:13:08.341 "raid_level": "raid1", 00:13:08.341 "superblock": true, 00:13:08.341 "num_base_bdevs": 2, 00:13:08.341 "num_base_bdevs_discovered": 1, 00:13:08.341 "num_base_bdevs_operational": 1, 00:13:08.341 "base_bdevs_list": [ 00:13:08.341 { 00:13:08.341 "name": null, 00:13:08.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.341 "is_configured": false, 00:13:08.341 "data_offset": 0, 00:13:08.341 "data_size": 63488 00:13:08.341 }, 00:13:08.341 { 00:13:08.341 "name": "BaseBdev2", 00:13:08.341 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:08.341 "is_configured": true, 00:13:08.341 "data_offset": 2048, 00:13:08.341 "data_size": 63488 00:13:08.341 } 00:13:08.341 ] 00:13:08.341 }' 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.341 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.909 18:59:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:08.909 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.909 18:59:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.909 [2024-11-26 18:59:59.988294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.909 [2024-11-26 19:00:00.005223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:08.909 19:00:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.909 19:00:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:08.909 [2024-11-26 19:00:00.007936] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.846 19:00:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.846 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.846 "name": "raid_bdev1", 00:13:09.846 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:09.846 "strip_size_kb": 0, 00:13:09.846 "state": "online", 00:13:09.846 "raid_level": "raid1", 00:13:09.847 "superblock": true, 00:13:09.847 "num_base_bdevs": 2, 00:13:09.847 "num_base_bdevs_discovered": 2, 00:13:09.847 "num_base_bdevs_operational": 2, 00:13:09.847 "process": { 00:13:09.847 "type": "rebuild", 00:13:09.847 "target": "spare", 00:13:09.847 "progress": { 00:13:09.847 "blocks": 20480, 00:13:09.847 "percent": 32 00:13:09.847 } 00:13:09.847 }, 00:13:09.847 "base_bdevs_list": [ 00:13:09.847 { 00:13:09.847 "name": "spare", 00:13:09.847 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:09.847 "is_configured": true, 00:13:09.847 "data_offset": 2048, 00:13:09.847 "data_size": 63488 00:13:09.847 }, 00:13:09.847 { 00:13:09.847 "name": "BaseBdev2", 00:13:09.847 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:09.847 
"is_configured": true, 00:13:09.847 "data_offset": 2048, 00:13:09.847 "data_size": 63488 00:13:09.847 } 00:13:09.847 ] 00:13:09.847 }' 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.847 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.847 [2024-11-26 19:00:01.205345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.106 [2024-11-26 19:00:01.217369] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.106 [2024-11-26 19:00:01.217481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.106 [2024-11-26 19:00:01.217508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.106 [2024-11-26 19:00:01.217529] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.106 "name": "raid_bdev1", 00:13:10.106 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:10.106 "strip_size_kb": 0, 00:13:10.106 "state": "online", 00:13:10.106 "raid_level": "raid1", 00:13:10.106 "superblock": true, 00:13:10.106 "num_base_bdevs": 2, 00:13:10.106 "num_base_bdevs_discovered": 1, 00:13:10.106 "num_base_bdevs_operational": 1, 00:13:10.106 "base_bdevs_list": [ 00:13:10.106 { 00:13:10.106 "name": null, 00:13:10.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.106 "is_configured": false, 00:13:10.106 "data_offset": 0, 00:13:10.106 "data_size": 
63488 00:13:10.106 }, 00:13:10.106 { 00:13:10.106 "name": "BaseBdev2", 00:13:10.106 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:10.106 "is_configured": true, 00:13:10.106 "data_offset": 2048, 00:13:10.106 "data_size": 63488 00:13:10.106 } 00:13:10.106 ] 00:13:10.106 }' 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.106 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.673 "name": "raid_bdev1", 00:13:10.673 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:10.673 "strip_size_kb": 0, 00:13:10.673 "state": "online", 00:13:10.673 "raid_level": "raid1", 00:13:10.673 "superblock": true, 00:13:10.673 "num_base_bdevs": 2, 00:13:10.673 "num_base_bdevs_discovered": 1, 
00:13:10.673 "num_base_bdevs_operational": 1, 00:13:10.673 "base_bdevs_list": [ 00:13:10.673 { 00:13:10.673 "name": null, 00:13:10.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.673 "is_configured": false, 00:13:10.673 "data_offset": 0, 00:13:10.673 "data_size": 63488 00:13:10.673 }, 00:13:10.673 { 00:13:10.673 "name": "BaseBdev2", 00:13:10.673 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:10.673 "is_configured": true, 00:13:10.673 "data_offset": 2048, 00:13:10.673 "data_size": 63488 00:13:10.673 } 00:13:10.673 ] 00:13:10.673 }' 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.673 [2024-11-26 19:00:01.922056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.673 [2024-11-26 19:00:01.937939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.673 19:00:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:10.673 [2024-11-26 19:00:01.940579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.608 19:00:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.867 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.867 "name": "raid_bdev1", 00:13:11.867 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:11.867 "strip_size_kb": 0, 00:13:11.867 "state": "online", 00:13:11.867 "raid_level": "raid1", 00:13:11.867 "superblock": true, 00:13:11.867 "num_base_bdevs": 2, 00:13:11.867 "num_base_bdevs_discovered": 2, 00:13:11.867 "num_base_bdevs_operational": 2, 00:13:11.867 "process": { 00:13:11.867 "type": "rebuild", 00:13:11.867 "target": "spare", 00:13:11.867 "progress": { 00:13:11.867 "blocks": 20480, 00:13:11.867 "percent": 32 00:13:11.867 } 00:13:11.867 }, 00:13:11.867 "base_bdevs_list": [ 00:13:11.867 { 00:13:11.867 "name": "spare", 00:13:11.867 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:11.867 "is_configured": true, 00:13:11.867 "data_offset": 2048, 00:13:11.867 "data_size": 63488 00:13:11.867 }, 00:13:11.867 { 00:13:11.867 "name": "BaseBdev2", 
00:13:11.867 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:11.867 "is_configured": true, 00:13:11.867 "data_offset": 2048, 00:13:11.867 "data_size": 63488 00:13:11.867 } 00:13:11.867 ] 00:13:11.867 }' 00:13:11.867 19:00:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:11.867 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=422 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.867 19:00:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.867 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.867 "name": "raid_bdev1", 00:13:11.867 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:11.867 "strip_size_kb": 0, 00:13:11.867 "state": "online", 00:13:11.867 "raid_level": "raid1", 00:13:11.867 "superblock": true, 00:13:11.867 "num_base_bdevs": 2, 00:13:11.867 "num_base_bdevs_discovered": 2, 00:13:11.867 "num_base_bdevs_operational": 2, 00:13:11.867 "process": { 00:13:11.867 "type": "rebuild", 00:13:11.867 "target": "spare", 00:13:11.867 "progress": { 00:13:11.867 "blocks": 22528, 00:13:11.867 "percent": 35 00:13:11.867 } 00:13:11.867 }, 00:13:11.867 "base_bdevs_list": [ 00:13:11.867 { 00:13:11.867 "name": "spare", 00:13:11.867 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:11.867 "is_configured": true, 00:13:11.867 "data_offset": 2048, 00:13:11.867 "data_size": 63488 00:13:11.868 }, 00:13:11.868 { 00:13:11.868 "name": "BaseBdev2", 00:13:11.868 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:11.868 "is_configured": true, 00:13:11.868 "data_offset": 2048, 00:13:11.868 "data_size": 63488 00:13:11.868 } 00:13:11.868 ] 00:13:11.868 }' 00:13:11.868 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.868 19:00:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.868 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.126 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.126 19:00:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.063 "name": "raid_bdev1", 00:13:13.063 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:13.063 "strip_size_kb": 0, 00:13:13.063 "state": "online", 00:13:13.063 "raid_level": "raid1", 00:13:13.063 "superblock": true, 00:13:13.063 "num_base_bdevs": 2, 00:13:13.063 
"num_base_bdevs_discovered": 2, 00:13:13.063 "num_base_bdevs_operational": 2, 00:13:13.063 "process": { 00:13:13.063 "type": "rebuild", 00:13:13.063 "target": "spare", 00:13:13.063 "progress": { 00:13:13.063 "blocks": 47104, 00:13:13.063 "percent": 74 00:13:13.063 } 00:13:13.063 }, 00:13:13.063 "base_bdevs_list": [ 00:13:13.063 { 00:13:13.063 "name": "spare", 00:13:13.063 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:13.063 "is_configured": true, 00:13:13.063 "data_offset": 2048, 00:13:13.063 "data_size": 63488 00:13:13.063 }, 00:13:13.063 { 00:13:13.063 "name": "BaseBdev2", 00:13:13.063 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:13.063 "is_configured": true, 00:13:13.063 "data_offset": 2048, 00:13:13.063 "data_size": 63488 00:13:13.063 } 00:13:13.063 ] 00:13:13.063 }' 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.063 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.338 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.338 19:00:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.930 [2024-11-26 19:00:05.065479] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:13.930 [2024-11-26 19:00:05.065599] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:13.930 [2024-11-26 19:00:05.065804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.190 19:00:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.190 "name": "raid_bdev1", 00:13:14.190 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:14.190 "strip_size_kb": 0, 00:13:14.190 "state": "online", 00:13:14.190 "raid_level": "raid1", 00:13:14.190 "superblock": true, 00:13:14.190 "num_base_bdevs": 2, 00:13:14.190 "num_base_bdevs_discovered": 2, 00:13:14.190 "num_base_bdevs_operational": 2, 00:13:14.190 "base_bdevs_list": [ 00:13:14.190 { 00:13:14.190 "name": "spare", 00:13:14.190 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:14.190 "is_configured": true, 00:13:14.190 "data_offset": 2048, 00:13:14.190 "data_size": 63488 00:13:14.190 }, 00:13:14.190 { 00:13:14.190 "name": "BaseBdev2", 00:13:14.190 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:14.190 "is_configured": true, 00:13:14.190 "data_offset": 2048, 00:13:14.190 "data_size": 63488 00:13:14.190 } 00:13:14.190 ] 00:13:14.190 }' 00:13:14.190 19:00:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.449 "name": "raid_bdev1", 00:13:14.449 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:14.449 "strip_size_kb": 0, 00:13:14.449 "state": "online", 00:13:14.449 "raid_level": "raid1", 00:13:14.449 "superblock": true, 00:13:14.449 "num_base_bdevs": 2, 00:13:14.449 "num_base_bdevs_discovered": 2, 
00:13:14.449 "num_base_bdevs_operational": 2, 00:13:14.449 "base_bdevs_list": [ 00:13:14.449 { 00:13:14.449 "name": "spare", 00:13:14.449 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:14.449 "is_configured": true, 00:13:14.449 "data_offset": 2048, 00:13:14.449 "data_size": 63488 00:13:14.449 }, 00:13:14.449 { 00:13:14.449 "name": "BaseBdev2", 00:13:14.449 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:14.449 "is_configured": true, 00:13:14.449 "data_offset": 2048, 00:13:14.449 "data_size": 63488 00:13:14.449 } 00:13:14.449 ] 00:13:14.449 }' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.449 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.450 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.450 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.708 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.708 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.708 "name": "raid_bdev1", 00:13:14.708 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:14.708 "strip_size_kb": 0, 00:13:14.708 "state": "online", 00:13:14.708 "raid_level": "raid1", 00:13:14.708 "superblock": true, 00:13:14.708 "num_base_bdevs": 2, 00:13:14.708 "num_base_bdevs_discovered": 2, 00:13:14.708 "num_base_bdevs_operational": 2, 00:13:14.708 "base_bdevs_list": [ 00:13:14.708 { 00:13:14.708 "name": "spare", 00:13:14.708 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:14.708 "is_configured": true, 00:13:14.708 "data_offset": 2048, 00:13:14.708 "data_size": 63488 00:13:14.708 }, 00:13:14.708 { 00:13:14.708 "name": "BaseBdev2", 00:13:14.708 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:14.708 "is_configured": true, 00:13:14.708 "data_offset": 2048, 00:13:14.708 "data_size": 63488 00:13:14.708 } 00:13:14.708 ] 00:13:14.708 }' 00:13:14.708 19:00:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.708 19:00:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:14.968 19:00:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.968 [2024-11-26 19:00:06.311297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.968 [2024-11-26 19:00:06.311341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.968 [2024-11-26 19:00:06.311453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.968 [2024-11-26 19:00:06.311563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.968 [2024-11-26 19:00:06.311585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.968 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.227 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:15.485 /dev/nbd0 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.485 1+0 records in 00:13:15.485 1+0 records out 00:13:15.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373365 s, 11.0 MB/s 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:15.485 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.486 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.486 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:15.486 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.486 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.486 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:15.744 /dev/nbd1 00:13:15.744 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:15.744 19:00:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.744 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:15.744 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:15.744 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.744 19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.744 
19:00:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:15.744 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:15.744 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.744 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.744 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.744 1+0 records in 00:13:15.744 1+0 records out 00:13:15.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477608 s, 8.6 MB/s 00:13:15.744 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.744 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:15.745 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.745 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.745 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:15.745 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.745 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.745 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.004 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.263 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.542 [2024-11-26 19:00:07.766480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.542 [2024-11-26 19:00:07.766565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.542 [2024-11-26 19:00:07.766608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:16.542 [2024-11-26 19:00:07.766625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.542 [2024-11-26 19:00:07.769706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.542 [2024-11-26 19:00:07.769769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:13:16.542 [2024-11-26 19:00:07.769948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:16.542 [2024-11-26 19:00:07.770025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.542 [2024-11-26 19:00:07.770219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.542 spare 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.542 [2024-11-26 19:00:07.870374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:16.542 [2024-11-26 19:00:07.870463] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.542 [2024-11-26 19:00:07.870943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:16.542 [2024-11-26 19:00:07.871240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:16.542 [2024-11-26 19:00:07.871258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:16.542 [2024-11-26 19:00:07.871559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.542 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.808 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.808 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.808 "name": "raid_bdev1", 00:13:16.808 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:16.808 "strip_size_kb": 0, 00:13:16.808 "state": "online", 00:13:16.808 "raid_level": "raid1", 00:13:16.808 "superblock": true, 00:13:16.808 "num_base_bdevs": 2, 00:13:16.808 "num_base_bdevs_discovered": 2, 00:13:16.808 "num_base_bdevs_operational": 2, 00:13:16.808 "base_bdevs_list": [ 00:13:16.808 { 00:13:16.808 "name": "spare", 00:13:16.808 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:16.808 "is_configured": true, 00:13:16.808 
"data_offset": 2048, 00:13:16.808 "data_size": 63488 00:13:16.808 }, 00:13:16.808 { 00:13:16.808 "name": "BaseBdev2", 00:13:16.808 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:16.808 "is_configured": true, 00:13:16.808 "data_offset": 2048, 00:13:16.808 "data_size": 63488 00:13:16.808 } 00:13:16.808 ] 00:13:16.808 }' 00:13:16.808 19:00:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.808 19:00:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.068 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.327 "name": "raid_bdev1", 00:13:17.327 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:17.327 "strip_size_kb": 0, 00:13:17.327 "state": "online", 00:13:17.327 "raid_level": "raid1", 00:13:17.327 "superblock": true, 00:13:17.327 "num_base_bdevs": 2, 
00:13:17.327 "num_base_bdevs_discovered": 2, 00:13:17.327 "num_base_bdevs_operational": 2, 00:13:17.327 "base_bdevs_list": [ 00:13:17.327 { 00:13:17.327 "name": "spare", 00:13:17.327 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:17.327 "is_configured": true, 00:13:17.327 "data_offset": 2048, 00:13:17.327 "data_size": 63488 00:13:17.327 }, 00:13:17.327 { 00:13:17.327 "name": "BaseBdev2", 00:13:17.327 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:17.327 "is_configured": true, 00:13:17.327 "data_offset": 2048, 00:13:17.327 "data_size": 63488 00:13:17.327 } 00:13:17.327 ] 00:13:17.327 }' 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.327 [2024-11-26 19:00:08.615719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:17.327 "name": "raid_bdev1", 00:13:17.327 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:17.327 "strip_size_kb": 0, 00:13:17.327 "state": "online", 00:13:17.327 "raid_level": "raid1", 00:13:17.327 "superblock": true, 00:13:17.327 "num_base_bdevs": 2, 00:13:17.327 "num_base_bdevs_discovered": 1, 00:13:17.327 "num_base_bdevs_operational": 1, 00:13:17.327 "base_bdevs_list": [ 00:13:17.327 { 00:13:17.327 "name": null, 00:13:17.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.327 "is_configured": false, 00:13:17.327 "data_offset": 0, 00:13:17.327 "data_size": 63488 00:13:17.327 }, 00:13:17.327 { 00:13:17.327 "name": "BaseBdev2", 00:13:17.327 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:17.327 "is_configured": true, 00:13:17.327 "data_offset": 2048, 00:13:17.327 "data_size": 63488 00:13:17.327 } 00:13:17.327 ] 00:13:17.327 }' 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.327 19:00:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.895 19:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:17.895 19:00:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.895 19:00:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.895 [2024-11-26 19:00:09.111871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.895 [2024-11-26 19:00:09.112157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:17.895 [2024-11-26 19:00:09.112185] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:17.895 [2024-11-26 19:00:09.112238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.895 [2024-11-26 19:00:09.127673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:17.895 19:00:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.895 19:00:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:17.895 [2024-11-26 19:00:09.130338] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.830 "name": "raid_bdev1", 00:13:18.830 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:18.830 "strip_size_kb": 0, 00:13:18.830 "state": "online", 00:13:18.830 "raid_level": "raid1", 
00:13:18.830 "superblock": true, 00:13:18.830 "num_base_bdevs": 2, 00:13:18.830 "num_base_bdevs_discovered": 2, 00:13:18.830 "num_base_bdevs_operational": 2, 00:13:18.830 "process": { 00:13:18.830 "type": "rebuild", 00:13:18.830 "target": "spare", 00:13:18.830 "progress": { 00:13:18.830 "blocks": 20480, 00:13:18.830 "percent": 32 00:13:18.830 } 00:13:18.830 }, 00:13:18.830 "base_bdevs_list": [ 00:13:18.830 { 00:13:18.830 "name": "spare", 00:13:18.830 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:18.830 "is_configured": true, 00:13:18.830 "data_offset": 2048, 00:13:18.830 "data_size": 63488 00:13:18.830 }, 00:13:18.830 { 00:13:18.830 "name": "BaseBdev2", 00:13:18.830 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:18.830 "is_configured": true, 00:13:18.830 "data_offset": 2048, 00:13:18.830 "data_size": 63488 00:13:18.830 } 00:13:18.830 ] 00:13:18.830 }' 00:13:18.830 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.089 [2024-11-26 19:00:10.303704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.089 [2024-11-26 19:00:10.339641] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:19.089 [2024-11-26 19:00:10.339765] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:19.089 [2024-11-26 19:00:10.339793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.089 [2024-11-26 19:00:10.339808] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.089 "name": "raid_bdev1", 00:13:19.089 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:19.089 "strip_size_kb": 0, 00:13:19.089 "state": "online", 00:13:19.089 "raid_level": "raid1", 00:13:19.089 "superblock": true, 00:13:19.089 "num_base_bdevs": 2, 00:13:19.089 "num_base_bdevs_discovered": 1, 00:13:19.089 "num_base_bdevs_operational": 1, 00:13:19.089 "base_bdevs_list": [ 00:13:19.089 { 00:13:19.089 "name": null, 00:13:19.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.089 "is_configured": false, 00:13:19.089 "data_offset": 0, 00:13:19.089 "data_size": 63488 00:13:19.089 }, 00:13:19.089 { 00:13:19.089 "name": "BaseBdev2", 00:13:19.089 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:19.089 "is_configured": true, 00:13:19.089 "data_offset": 2048, 00:13:19.089 "data_size": 63488 00:13:19.089 } 00:13:19.089 ] 00:13:19.089 }' 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.089 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.654 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.654 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.654 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.654 [2024-11-26 19:00:10.904194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.654 [2024-11-26 19:00:10.904288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.654 [2024-11-26 19:00:10.904322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:19.654 [2024-11-26 19:00:10.904342] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.654 [2024-11-26 19:00:10.904990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.654 [2024-11-26 19:00:10.905023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.654 [2024-11-26 19:00:10.905155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.654 [2024-11-26 19:00:10.905181] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:19.654 [2024-11-26 19:00:10.905195] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:19.654 [2024-11-26 19:00:10.905231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.654 [2024-11-26 19:00:10.921017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:19.654 spare 00:13:19.654 19:00:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.654 19:00:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:19.654 [2024-11-26 19:00:10.923669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.589 19:00:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.846 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.846 "name": "raid_bdev1", 00:13:20.846 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:20.846 "strip_size_kb": 0, 00:13:20.846 "state": "online", 00:13:20.846 "raid_level": "raid1", 00:13:20.846 "superblock": true, 00:13:20.846 "num_base_bdevs": 2, 00:13:20.846 "num_base_bdevs_discovered": 2, 00:13:20.846 "num_base_bdevs_operational": 2, 00:13:20.846 "process": { 00:13:20.846 "type": "rebuild", 00:13:20.846 "target": "spare", 00:13:20.846 "progress": { 00:13:20.846 "blocks": 20480, 00:13:20.846 "percent": 32 00:13:20.846 } 00:13:20.846 }, 00:13:20.846 "base_bdevs_list": [ 00:13:20.846 { 00:13:20.846 "name": "spare", 00:13:20.846 "uuid": "de2aa0f4-09d9-5d75-beda-c5a4741582c9", 00:13:20.846 "is_configured": true, 00:13:20.846 "data_offset": 2048, 00:13:20.846 "data_size": 63488 00:13:20.846 }, 00:13:20.846 { 00:13:20.846 "name": "BaseBdev2", 00:13:20.846 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:20.846 "is_configured": true, 00:13:20.846 "data_offset": 2048, 00:13:20.846 "data_size": 63488 00:13:20.846 } 00:13:20.846 ] 00:13:20.846 }' 00:13:20.846 19:00:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.846 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.846 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.846 
19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.846 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.846 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.847 [2024-11-26 19:00:12.089133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.847 [2024-11-26 19:00:12.133118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.847 [2024-11-26 19:00:12.133232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.847 [2024-11-26 19:00:12.133265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.847 [2024-11-26 19:00:12.133278] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.847 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.105 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.105 "name": "raid_bdev1", 00:13:21.105 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:21.105 "strip_size_kb": 0, 00:13:21.105 "state": "online", 00:13:21.105 "raid_level": "raid1", 00:13:21.105 "superblock": true, 00:13:21.105 "num_base_bdevs": 2, 00:13:21.105 "num_base_bdevs_discovered": 1, 00:13:21.105 "num_base_bdevs_operational": 1, 00:13:21.105 "base_bdevs_list": [ 00:13:21.105 { 00:13:21.105 "name": null, 00:13:21.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.105 "is_configured": false, 00:13:21.105 "data_offset": 0, 00:13:21.105 "data_size": 63488 00:13:21.105 }, 00:13:21.105 { 00:13:21.105 "name": "BaseBdev2", 00:13:21.105 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:21.105 "is_configured": true, 00:13:21.105 "data_offset": 2048, 00:13:21.105 "data_size": 63488 00:13:21.105 } 00:13:21.105 ] 00:13:21.105 }' 00:13:21.105 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.105 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.364 19:00:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.364 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.364 "name": "raid_bdev1", 00:13:21.364 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:21.364 "strip_size_kb": 0, 00:13:21.364 "state": "online", 00:13:21.364 "raid_level": "raid1", 00:13:21.364 "superblock": true, 00:13:21.364 "num_base_bdevs": 2, 00:13:21.364 "num_base_bdevs_discovered": 1, 00:13:21.364 "num_base_bdevs_operational": 1, 00:13:21.364 "base_bdevs_list": [ 00:13:21.364 { 00:13:21.364 "name": null, 00:13:21.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.364 "is_configured": false, 00:13:21.364 "data_offset": 0, 00:13:21.364 "data_size": 63488 00:13:21.364 }, 00:13:21.364 { 00:13:21.364 "name": "BaseBdev2", 00:13:21.364 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:21.364 "is_configured": true, 00:13:21.364 "data_offset": 2048, 00:13:21.364 "data_size": 
63488 00:13:21.364 } 00:13:21.364 ] 00:13:21.364 }' 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.624 [2024-11-26 19:00:12.853890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.624 [2024-11-26 19:00:12.853982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.624 [2024-11-26 19:00:12.854026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:21.624 [2024-11-26 19:00:12.854055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.624 [2024-11-26 19:00:12.854659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.624 [2024-11-26 19:00:12.854694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:21.624 [2024-11-26 19:00:12.854811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:21.624 [2024-11-26 19:00:12.854840] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:21.624 [2024-11-26 19:00:12.854859] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:21.624 [2024-11-26 19:00:12.854873] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:21.624 BaseBdev1 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.624 19:00:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.560 19:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.819 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.820 "name": "raid_bdev1", 00:13:22.820 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:22.820 "strip_size_kb": 0, 00:13:22.820 "state": "online", 00:13:22.820 "raid_level": "raid1", 00:13:22.820 "superblock": true, 00:13:22.820 "num_base_bdevs": 2, 00:13:22.820 "num_base_bdevs_discovered": 1, 00:13:22.820 "num_base_bdevs_operational": 1, 00:13:22.820 "base_bdevs_list": [ 00:13:22.820 { 00:13:22.820 "name": null, 00:13:22.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.820 "is_configured": false, 00:13:22.820 "data_offset": 0, 00:13:22.820 "data_size": 63488 00:13:22.820 }, 00:13:22.820 { 00:13:22.820 "name": "BaseBdev2", 00:13:22.820 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:22.820 "is_configured": true, 00:13:22.820 "data_offset": 2048, 00:13:22.820 "data_size": 63488 00:13:22.820 } 00:13:22.820 ] 00:13:22.820 }' 00:13:22.820 19:00:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.820 19:00:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.079 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.338 "name": "raid_bdev1", 00:13:23.338 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:23.338 "strip_size_kb": 0, 00:13:23.338 "state": "online", 00:13:23.338 "raid_level": "raid1", 00:13:23.338 "superblock": true, 00:13:23.338 "num_base_bdevs": 2, 00:13:23.338 "num_base_bdevs_discovered": 1, 00:13:23.338 "num_base_bdevs_operational": 1, 00:13:23.338 "base_bdevs_list": [ 00:13:23.338 { 00:13:23.338 "name": null, 00:13:23.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.338 "is_configured": false, 00:13:23.338 "data_offset": 0, 00:13:23.338 "data_size": 63488 00:13:23.338 }, 00:13:23.338 { 00:13:23.338 "name": "BaseBdev2", 00:13:23.338 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:23.338 "is_configured": true, 00:13:23.338 "data_offset": 2048, 00:13:23.338 "data_size": 63488 00:13:23.338 } 00:13:23.338 ] 00:13:23.338 }' 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.338 19:00:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.338 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.338 [2024-11-26 19:00:14.562427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.338 [2024-11-26 19:00:14.562669] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:23.338 [2024-11-26 19:00:14.562700] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:23.338 request: 00:13:23.338 { 00:13:23.338 "base_bdev": "BaseBdev1", 00:13:23.338 "raid_bdev": "raid_bdev1", 00:13:23.338 "method": 
"bdev_raid_add_base_bdev", 00:13:23.338 "req_id": 1 00:13:23.338 } 00:13:23.338 Got JSON-RPC error response 00:13:23.338 response: 00:13:23.338 { 00:13:23.338 "code": -22, 00:13:23.338 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:23.339 } 00:13:23.339 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:23.339 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:23.339 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.339 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.339 19:00:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.339 19:00:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.275 19:00:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.275 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.275 "name": "raid_bdev1", 00:13:24.275 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:24.275 "strip_size_kb": 0, 00:13:24.275 "state": "online", 00:13:24.275 "raid_level": "raid1", 00:13:24.275 "superblock": true, 00:13:24.275 "num_base_bdevs": 2, 00:13:24.275 "num_base_bdevs_discovered": 1, 00:13:24.275 "num_base_bdevs_operational": 1, 00:13:24.275 "base_bdevs_list": [ 00:13:24.275 { 00:13:24.275 "name": null, 00:13:24.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.275 "is_configured": false, 00:13:24.275 "data_offset": 0, 00:13:24.275 "data_size": 63488 00:13:24.275 }, 00:13:24.275 { 00:13:24.275 "name": "BaseBdev2", 00:13:24.275 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:24.276 "is_configured": true, 00:13:24.276 "data_offset": 2048, 00:13:24.276 "data_size": 63488 00:13:24.276 } 00:13:24.276 ] 00:13:24.276 }' 00:13:24.276 19:00:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.276 19:00:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.844 "name": "raid_bdev1", 00:13:24.844 "uuid": "64b556a4-26e3-4951-8759-89577cc0a423", 00:13:24.844 "strip_size_kb": 0, 00:13:24.844 "state": "online", 00:13:24.844 "raid_level": "raid1", 00:13:24.844 "superblock": true, 00:13:24.844 "num_base_bdevs": 2, 00:13:24.844 "num_base_bdevs_discovered": 1, 00:13:24.844 "num_base_bdevs_operational": 1, 00:13:24.844 "base_bdevs_list": [ 00:13:24.844 { 00:13:24.844 "name": null, 00:13:24.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.844 "is_configured": false, 00:13:24.844 "data_offset": 0, 00:13:24.844 "data_size": 63488 00:13:24.844 }, 00:13:24.844 { 00:13:24.844 "name": "BaseBdev2", 00:13:24.844 "uuid": "73f57e68-b67e-53fb-96af-f21b624d44e1", 00:13:24.844 "is_configured": true, 00:13:24.844 "data_offset": 2048, 00:13:24.844 "data_size": 63488 00:13:24.844 } 00:13:24.844 ] 00:13:24.844 }' 00:13:24.844 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75980 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75980 ']' 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75980 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75980 00:13:25.103 killing process with pid 75980 00:13:25.103 Received shutdown signal, test time was about 60.000000 seconds 00:13:25.103 00:13:25.103 Latency(us) 00:13:25.103 [2024-11-26T19:00:16.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.103 [2024-11-26T19:00:16.470Z] =================================================================================================================== 00:13:25.103 [2024-11-26T19:00:16.470Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75980' 00:13:25.103 19:00:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75980 00:13:25.103 [2024-11-26 19:00:16.305489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.103 19:00:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75980 00:13:25.103 [2024-11-26 19:00:16.305662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.103 [2024-11-26 19:00:16.305737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.103 [2024-11-26 19:00:16.305757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:25.362 [2024-11-26 19:00:16.581375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.296 19:00:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:26.296 00:13:26.296 real 0m27.016s 00:13:26.296 user 0m33.313s 00:13:26.296 sys 0m4.168s 00:13:26.296 ************************************ 00:13:26.296 END TEST raid_rebuild_test_sb 00:13:26.296 ************************************ 00:13:26.296 19:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.296 19:00:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.554 19:00:17 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:26.554 19:00:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:26.554 19:00:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.554 19:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.554 ************************************ 00:13:26.554 START TEST raid_rebuild_test_io 00:13:26.554 ************************************ 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:26.554 
19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:26.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76742 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76742 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76742 ']' 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.554 19:00:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.554 [2024-11-26 19:00:17.807758] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:13:26.554 [2024-11-26 19:00:17.808126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76742 ] 00:13:26.554 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:26.554 Zero copy mechanism will not be used. 
00:13:26.812 [2024-11-26 19:00:17.984416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.812 [2024-11-26 19:00:18.133173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.070 [2024-11-26 19:00:18.338410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.070 [2024-11-26 19:00:18.338494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.636 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.636 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:27.636 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.636 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.636 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 BaseBdev1_malloc 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 [2024-11-26 19:00:18.840316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:27.637 [2024-11-26 19:00:18.840418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.637 [2024-11-26 19:00:18.840464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:27.637 [2024-11-26 
19:00:18.840484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.637 [2024-11-26 19:00:18.843510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.637 [2024-11-26 19:00:18.843593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.637 BaseBdev1 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 BaseBdev2_malloc 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 [2024-11-26 19:00:18.896923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:27.637 [2024-11-26 19:00:18.897018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.637 [2024-11-26 19:00:18.897056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:27.637 [2024-11-26 19:00:18.897076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.637 [2024-11-26 19:00:18.900051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:27.637 [2024-11-26 19:00:18.900247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:27.637 BaseBdev2 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 spare_malloc 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 spare_delay 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 [2024-11-26 19:00:18.982333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.637 [2024-11-26 19:00:18.982612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.637 [2024-11-26 19:00:18.982660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:27.637 [2024-11-26 19:00:18.982681] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.637 [2024-11-26 19:00:18.985707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.637 [2024-11-26 19:00:18.985887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.637 spare 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.637 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.637 [2024-11-26 19:00:18.994449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.637 [2024-11-26 19:00:18.997077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.637 [2024-11-26 19:00:18.997242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:27.637 [2024-11-26 19:00:18.997267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:27.637 [2024-11-26 19:00:18.997647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:27.637 [2024-11-26 19:00:18.997880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:27.637 [2024-11-26 19:00:18.997917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:27.637 [2024-11-26 19:00:18.998151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.898 19:00:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.898 "name": "raid_bdev1", 00:13:27.898 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:27.898 "strip_size_kb": 0, 00:13:27.898 "state": "online", 00:13:27.898 "raid_level": "raid1", 00:13:27.898 "superblock": false, 00:13:27.898 "num_base_bdevs": 2, 00:13:27.898 
"num_base_bdevs_discovered": 2, 00:13:27.898 "num_base_bdevs_operational": 2, 00:13:27.898 "base_bdevs_list": [ 00:13:27.898 { 00:13:27.898 "name": "BaseBdev1", 00:13:27.898 "uuid": "2446b096-2e9e-560d-aaaa-4a7c74455042", 00:13:27.898 "is_configured": true, 00:13:27.898 "data_offset": 0, 00:13:27.898 "data_size": 65536 00:13:27.898 }, 00:13:27.898 { 00:13:27.898 "name": "BaseBdev2", 00:13:27.898 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:27.898 "is_configured": true, 00:13:27.898 "data_offset": 0, 00:13:27.898 "data_size": 65536 00:13:27.898 } 00:13:27.898 ] 00:13:27.898 }' 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.898 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.156 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.156 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.156 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.156 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:28.156 [2024-11-26 19:00:19.510871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:28.415 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.416 [2024-11-26 19:00:19.622532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.416 "name": "raid_bdev1", 00:13:28.416 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:28.416 "strip_size_kb": 0, 00:13:28.416 "state": "online", 00:13:28.416 "raid_level": "raid1", 00:13:28.416 "superblock": false, 00:13:28.416 "num_base_bdevs": 2, 00:13:28.416 "num_base_bdevs_discovered": 1, 00:13:28.416 "num_base_bdevs_operational": 1, 00:13:28.416 "base_bdevs_list": [ 00:13:28.416 { 00:13:28.416 "name": null, 00:13:28.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.416 "is_configured": false, 00:13:28.416 "data_offset": 0, 00:13:28.416 "data_size": 65536 00:13:28.416 }, 00:13:28.416 { 00:13:28.416 "name": "BaseBdev2", 00:13:28.416 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:28.416 "is_configured": true, 00:13:28.416 "data_offset": 0, 00:13:28.416 "data_size": 65536 00:13:28.416 } 00:13:28.416 ] 00:13:28.416 }' 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.416 19:00:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.416 [2024-11-26 19:00:19.754677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:28.416 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:13:28.416 Zero copy mechanism will not be used. 00:13:28.416 Running I/O for 60 seconds... 00:13:28.982 19:00:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.982 19:00:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.982 19:00:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.982 [2024-11-26 19:00:20.148228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.982 19:00:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.982 19:00:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:28.982 [2024-11-26 19:00:20.214213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:28.982 [2024-11-26 19:00:20.216876] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.982 [2024-11-26 19:00:20.320757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:28.982 [2024-11-26 19:00:20.321468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:29.239 [2024-11-26 19:00:20.542047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.240 [2024-11-26 19:00:20.542751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.788 175.00 IOPS, 525.00 MiB/s [2024-11-26T19:00:21.155Z] [2024-11-26 19:00:20.954025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:29.788 [2024-11-26 19:00:21.075464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.048 "name": "raid_bdev1", 00:13:30.048 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:30.048 "strip_size_kb": 0, 00:13:30.048 "state": "online", 00:13:30.048 "raid_level": "raid1", 00:13:30.048 "superblock": false, 00:13:30.048 "num_base_bdevs": 2, 00:13:30.048 "num_base_bdevs_discovered": 2, 00:13:30.048 "num_base_bdevs_operational": 2, 00:13:30.048 "process": { 00:13:30.048 "type": "rebuild", 00:13:30.048 "target": "spare", 00:13:30.048 "progress": { 00:13:30.048 "blocks": 12288, 00:13:30.048 "percent": 18 00:13:30.048 } 00:13:30.048 }, 00:13:30.048 "base_bdevs_list": [ 00:13:30.048 { 00:13:30.048 "name": "spare", 00:13:30.048 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:30.048 
"is_configured": true, 00:13:30.048 "data_offset": 0, 00:13:30.048 "data_size": 65536 00:13:30.048 }, 00:13:30.048 { 00:13:30.048 "name": "BaseBdev2", 00:13:30.048 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:30.048 "is_configured": true, 00:13:30.048 "data_offset": 0, 00:13:30.048 "data_size": 65536 00:13:30.048 } 00:13:30.048 ] 00:13:30.048 }' 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.048 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.048 [2024-11-26 19:00:21.357666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.307 [2024-11-26 19:00:21.413599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:30.307 [2024-11-26 19:00:21.414049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:30.307 [2024-11-26 19:00:21.524072] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:30.307 [2024-11-26 19:00:21.543504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.307 [2024-11-26 19:00:21.543868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.307 [2024-11-26 19:00:21.543932] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:30.307 [2024-11-26 19:00:21.596022] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.307 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:30.566 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.566 "name": "raid_bdev1", 00:13:30.566 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:30.566 "strip_size_kb": 0, 00:13:30.566 "state": "online", 00:13:30.566 "raid_level": "raid1", 00:13:30.566 "superblock": false, 00:13:30.566 "num_base_bdevs": 2, 00:13:30.566 "num_base_bdevs_discovered": 1, 00:13:30.566 "num_base_bdevs_operational": 1, 00:13:30.566 "base_bdevs_list": [ 00:13:30.566 { 00:13:30.566 "name": null, 00:13:30.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.566 "is_configured": false, 00:13:30.566 "data_offset": 0, 00:13:30.566 "data_size": 65536 00:13:30.566 }, 00:13:30.566 { 00:13:30.566 "name": "BaseBdev2", 00:13:30.566 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:30.566 "is_configured": true, 00:13:30.566 "data_offset": 0, 00:13:30.566 "data_size": 65536 00:13:30.566 } 00:13:30.566 ] 00:13:30.566 }' 00:13:30.566 19:00:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.566 19:00:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.824 130.00 IOPS, 390.00 MiB/s [2024-11-26T19:00:22.191Z] 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.824 19:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.083 "name": "raid_bdev1", 00:13:31.083 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:31.083 "strip_size_kb": 0, 00:13:31.083 "state": "online", 00:13:31.083 "raid_level": "raid1", 00:13:31.083 "superblock": false, 00:13:31.083 "num_base_bdevs": 2, 00:13:31.083 "num_base_bdevs_discovered": 1, 00:13:31.083 "num_base_bdevs_operational": 1, 00:13:31.083 "base_bdevs_list": [ 00:13:31.083 { 00:13:31.083 "name": null, 00:13:31.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.083 "is_configured": false, 00:13:31.083 "data_offset": 0, 00:13:31.083 "data_size": 65536 00:13:31.083 }, 00:13:31.083 { 00:13:31.083 "name": "BaseBdev2", 00:13:31.083 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:31.083 "is_configured": true, 00:13:31.083 "data_offset": 0, 00:13:31.083 "data_size": 65536 00:13:31.083 } 00:13:31.083 ] 00:13:31.083 }' 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.083 19:00:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.083 [2024-11-26 19:00:22.321621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.083 19:00:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:31.083 [2024-11-26 19:00:22.428516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:31.083 [2024-11-26 19:00:22.431227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.342 [2024-11-26 19:00:22.558285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:31.600 138.00 IOPS, 414.00 MiB/s [2024-11-26T19:00:22.967Z] [2024-11-26 19:00:22.777408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:31.600 [2024-11-26 19:00:22.778032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.166 [2024-11-26 19:00:23.242887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:32.166 [2024-11-26 19:00:23.243297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.166 19:00:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.166 "name": "raid_bdev1", 00:13:32.166 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:32.166 "strip_size_kb": 0, 00:13:32.166 "state": "online", 00:13:32.166 "raid_level": "raid1", 00:13:32.166 "superblock": false, 00:13:32.166 "num_base_bdevs": 2, 00:13:32.166 "num_base_bdevs_discovered": 2, 00:13:32.166 "num_base_bdevs_operational": 2, 00:13:32.166 "process": { 00:13:32.166 "type": "rebuild", 00:13:32.166 "target": "spare", 00:13:32.166 "progress": { 00:13:32.166 "blocks": 10240, 00:13:32.166 "percent": 15 00:13:32.166 } 00:13:32.166 }, 00:13:32.166 "base_bdevs_list": [ 00:13:32.166 { 00:13:32.166 "name": "spare", 00:13:32.166 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:32.166 "is_configured": true, 00:13:32.166 "data_offset": 0, 00:13:32.166 "data_size": 65536 00:13:32.166 }, 00:13:32.166 { 00:13:32.166 "name": "BaseBdev2", 00:13:32.166 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:32.166 "is_configured": true, 00:13:32.166 "data_offset": 0, 00:13:32.166 "data_size": 65536 00:13:32.166 } 00:13:32.166 ] 00:13:32.166 }' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.166 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.424 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.424 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.424 19:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.424 19:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.424 19:00:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:32.424 [2024-11-26 19:00:23.569037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:32.424 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.424 "name": "raid_bdev1", 00:13:32.424 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:32.424 "strip_size_kb": 0, 00:13:32.425 "state": "online", 00:13:32.425 "raid_level": "raid1", 00:13:32.425 "superblock": false, 00:13:32.425 "num_base_bdevs": 2, 00:13:32.425 "num_base_bdevs_discovered": 2, 00:13:32.425 "num_base_bdevs_operational": 2, 00:13:32.425 "process": { 00:13:32.425 "type": "rebuild", 00:13:32.425 "target": "spare", 00:13:32.425 "progress": { 00:13:32.425 "blocks": 12288, 00:13:32.425 "percent": 18 00:13:32.425 } 00:13:32.425 }, 00:13:32.425 "base_bdevs_list": [ 00:13:32.425 { 00:13:32.425 "name": "spare", 00:13:32.425 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:32.425 "is_configured": true, 00:13:32.425 "data_offset": 0, 00:13:32.425 "data_size": 65536 00:13:32.425 }, 00:13:32.425 { 00:13:32.425 "name": "BaseBdev2", 00:13:32.425 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:32.425 "is_configured": true, 00:13:32.425 "data_offset": 0, 00:13:32.425 "data_size": 65536 00:13:32.425 } 00:13:32.425 ] 00:13:32.425 }' 00:13:32.425 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.425 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.425 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.425 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.425 19:00:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.425 [2024-11-26 19:00:23.680855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:32.683 118.25 IOPS, 354.75 MiB/s [2024-11-26T19:00:24.050Z] [2024-11-26 19:00:23.903476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:32.683 [2024-11-26 19:00:24.022151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:32.683 [2024-11-26 19:00:24.022808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.616 "name": "raid_bdev1", 00:13:33.616 "uuid": 
"49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:33.616 "strip_size_kb": 0, 00:13:33.616 "state": "online", 00:13:33.616 "raid_level": "raid1", 00:13:33.616 "superblock": false, 00:13:33.616 "num_base_bdevs": 2, 00:13:33.616 "num_base_bdevs_discovered": 2, 00:13:33.616 "num_base_bdevs_operational": 2, 00:13:33.616 "process": { 00:13:33.616 "type": "rebuild", 00:13:33.616 "target": "spare", 00:13:33.616 "progress": { 00:13:33.616 "blocks": 30720, 00:13:33.616 "percent": 46 00:13:33.616 } 00:13:33.616 }, 00:13:33.616 "base_bdevs_list": [ 00:13:33.616 { 00:13:33.616 "name": "spare", 00:13:33.616 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:33.616 "is_configured": true, 00:13:33.616 "data_offset": 0, 00:13:33.616 "data_size": 65536 00:13:33.616 }, 00:13:33.616 { 00:13:33.616 "name": "BaseBdev2", 00:13:33.616 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:33.616 "is_configured": true, 00:13:33.616 "data_offset": 0, 00:13:33.616 "data_size": 65536 00:13:33.616 } 00:13:33.616 ] 00:13:33.616 }' 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.616 109.00 IOPS, 327.00 MiB/s [2024-11-26T19:00:24.983Z] 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.616 19:00:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:34.551 98.00 IOPS, 294.00 MiB/s [2024-11-26T19:00:25.918Z] 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.551 19:00:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.809 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.809 "name": "raid_bdev1", 00:13:34.809 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:34.809 "strip_size_kb": 0, 00:13:34.809 "state": "online", 00:13:34.809 "raid_level": "raid1", 00:13:34.809 "superblock": false, 00:13:34.809 "num_base_bdevs": 2, 00:13:34.809 "num_base_bdevs_discovered": 2, 00:13:34.809 "num_base_bdevs_operational": 2, 00:13:34.809 "process": { 00:13:34.809 "type": "rebuild", 00:13:34.809 "target": "spare", 00:13:34.809 "progress": { 00:13:34.809 "blocks": 53248, 00:13:34.809 "percent": 81 00:13:34.809 } 00:13:34.809 }, 00:13:34.809 "base_bdevs_list": [ 00:13:34.809 { 00:13:34.809 "name": "spare", 00:13:34.809 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:34.809 "is_configured": true, 00:13:34.809 "data_offset": 0, 00:13:34.809 "data_size": 65536 00:13:34.809 }, 00:13:34.809 { 00:13:34.810 "name": "BaseBdev2", 00:13:34.810 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:34.810 "is_configured": true, 00:13:34.810 "data_offset": 0, 00:13:34.810 
"data_size": 65536 00:13:34.810 } 00:13:34.810 ] 00:13:34.810 }' 00:13:34.810 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.810 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.810 19:00:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.810 [2024-11-26 19:00:26.031730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:34.810 19:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.810 19:00:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.375 [2024-11-26 19:00:26.481709] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:35.375 [2024-11-26 19:00:26.589577] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:35.375 [2024-11-26 19:00:26.593130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.890 88.57 IOPS, 265.71 MiB/s [2024-11-26T19:00:27.257Z] 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.890 "name": "raid_bdev1", 00:13:35.890 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:35.890 "strip_size_kb": 0, 00:13:35.890 "state": "online", 00:13:35.890 "raid_level": "raid1", 00:13:35.890 "superblock": false, 00:13:35.890 "num_base_bdevs": 2, 00:13:35.890 "num_base_bdevs_discovered": 2, 00:13:35.890 "num_base_bdevs_operational": 2, 00:13:35.890 "base_bdevs_list": [ 00:13:35.890 { 00:13:35.890 "name": "spare", 00:13:35.890 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:35.890 "is_configured": true, 00:13:35.890 "data_offset": 0, 00:13:35.890 "data_size": 65536 00:13:35.890 }, 00:13:35.890 { 00:13:35.890 "name": "BaseBdev2", 00:13:35.890 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:35.890 "is_configured": true, 00:13:35.890 "data_offset": 0, 00:13:35.890 "data_size": 65536 00:13:35.890 } 00:13:35.890 ] 00:13:35.890 }' 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:35.890 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.891 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.148 "name": "raid_bdev1", 00:13:36.148 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:36.148 "strip_size_kb": 0, 00:13:36.148 "state": "online", 00:13:36.148 "raid_level": "raid1", 00:13:36.148 "superblock": false, 00:13:36.148 "num_base_bdevs": 2, 00:13:36.148 "num_base_bdevs_discovered": 2, 00:13:36.148 "num_base_bdevs_operational": 2, 00:13:36.148 "base_bdevs_list": [ 00:13:36.148 { 00:13:36.148 "name": "spare", 00:13:36.148 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:36.148 "is_configured": true, 00:13:36.148 "data_offset": 0, 00:13:36.148 "data_size": 65536 00:13:36.148 }, 00:13:36.148 { 00:13:36.148 "name": "BaseBdev2", 00:13:36.148 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:36.148 "is_configured": true, 00:13:36.148 "data_offset": 0, 00:13:36.148 "data_size": 65536 00:13:36.148 } 
00:13:36.148 ] 00:13:36.148 }' 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.148 "name": "raid_bdev1", 00:13:36.148 "uuid": "49302fd5-d385-4853-9ce0-8a0401df6f48", 00:13:36.148 "strip_size_kb": 0, 00:13:36.148 "state": "online", 00:13:36.148 "raid_level": "raid1", 00:13:36.148 "superblock": false, 00:13:36.148 "num_base_bdevs": 2, 00:13:36.148 "num_base_bdevs_discovered": 2, 00:13:36.148 "num_base_bdevs_operational": 2, 00:13:36.148 "base_bdevs_list": [ 00:13:36.148 { 00:13:36.148 "name": "spare", 00:13:36.148 "uuid": "6dd52ced-4c3b-52d2-be4e-4a5894e1f2db", 00:13:36.148 "is_configured": true, 00:13:36.148 "data_offset": 0, 00:13:36.148 "data_size": 65536 00:13:36.148 }, 00:13:36.148 { 00:13:36.148 "name": "BaseBdev2", 00:13:36.148 "uuid": "305d6b94-33ef-5961-b411-e9f9fed44e8d", 00:13:36.148 "is_configured": true, 00:13:36.148 "data_offset": 0, 00:13:36.148 "data_size": 65536 00:13:36.148 } 00:13:36.148 ] 00:13:36.148 }' 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.148 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 82.75 IOPS, 248.25 MiB/s [2024-11-26T19:00:28.082Z] 19:00:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.715 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.715 19:00:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 [2024-11-26 19:00:27.924806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.715 [2024-11-26 19:00:27.924863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.715 00:13:36.715 Latency(us) 00:13:36.715 
[2024-11-26T19:00:28.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.715 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:36.715 raid_bdev1 : 8.27 80.80 242.40 0.00 0.00 16951.29 284.86 119632.99 00:13:36.715 [2024-11-26T19:00:28.082Z] =================================================================================================================== 00:13:36.715 [2024-11-26T19:00:28.082Z] Total : 80.80 242.40 0.00 0.00 16951.29 284.86 119632.99 00:13:36.715 [2024-11-26 19:00:28.045589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.715 [2024-11-26 19:00:28.045672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.715 [2024-11-26 19:00:28.045791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.715 [2024-11-26 19:00:28.045813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:36.715 { 00:13:36.715 "results": [ 00:13:36.715 { 00:13:36.715 "job": "raid_bdev1", 00:13:36.715 "core_mask": "0x1", 00:13:36.715 "workload": "randrw", 00:13:36.715 "percentage": 50, 00:13:36.715 "status": "finished", 00:13:36.715 "queue_depth": 2, 00:13:36.715 "io_size": 3145728, 00:13:36.715 "runtime": 8.267217, 00:13:36.715 "iops": 80.80107247698953, 00:13:36.715 "mibps": 242.4032174309686, 00:13:36.715 "io_failed": 0, 00:13:36.715 "io_timeout": 0, 00:13:36.715 "avg_latency_us": 16951.292934131736, 00:13:36.715 "min_latency_us": 284.85818181818183, 00:13:36.715 "max_latency_us": 119632.98909090909 00:13:36.715 } 00:13:36.715 ], 00:13:36.715 "core_count": 1 00:13:36.715 } 00:13:36.715 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.715 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.715 19:00:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.715 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:36.715 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.974 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:37.231 /dev/nbd0 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:37.231 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.232 1+0 records in 00:13:37.232 1+0 records out 00:13:37.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381396 s, 10.7 MB/s 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.232 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:37.490 /dev/nbd1 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.490 1+0 records in 00:13:37.490 1+0 records out 00:13:37.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340876 s, 12.0 MB/s 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.490 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:37.491 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.491 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:37.491 19:00:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:37.491 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.491 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.491 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.754 19:00:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.031 19:00:29 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76742 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76742 ']' 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76742 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76742 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 76742' 00:13:38.292 killing process with pid 76742 00:13:38.292 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76742 00:13:38.292 Received shutdown signal, test time was about 9.797106 seconds 00:13:38.292 00:13:38.292 Latency(us) 00:13:38.292 [2024-11-26T19:00:29.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.292 [2024-11-26T19:00:29.659Z] =================================================================================================================== 00:13:38.292 [2024-11-26T19:00:29.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.293 [2024-11-26 19:00:29.554608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.293 19:00:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76742 00:13:38.551 [2024-11-26 19:00:29.765606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:39.927 00:13:39.927 real 0m13.170s 00:13:39.927 user 0m17.343s 00:13:39.927 sys 0m1.389s 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.927 ************************************ 00:13:39.927 END TEST raid_rebuild_test_io 00:13:39.927 ************************************ 00:13:39.927 19:00:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:39.927 19:00:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:39.927 19:00:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.927 19:00:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.927 ************************************ 00:13:39.927 START TEST raid_rebuild_test_sb_io 00:13:39.927 
************************************ 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77126 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77126 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77126 ']' 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.927 19:00:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.927 [2024-11-26 19:00:31.033105] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:13:39.927 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:39.927 Zero copy mechanism will not be used. 00:13:39.927 [2024-11-26 19:00:31.033267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77126 ] 00:13:39.927 [2024-11-26 19:00:31.206151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.186 [2024-11-26 19:00:31.335750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.186 [2024-11-26 19:00:31.539396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.186 [2024-11-26 19:00:31.539448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.752 BaseBdev1_malloc 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.752 [2024-11-26 19:00:32.093587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:40.752 [2024-11-26 19:00:32.093663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.752 [2024-11-26 19:00:32.093691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:40.752 [2024-11-26 19:00:32.093709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.752 [2024-11-26 19:00:32.096588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.752 [2024-11-26 19:00:32.096636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:40.752 BaseBdev1 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.752 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.011 BaseBdev2_malloc 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 [2024-11-26 19:00:32.147059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:41.012 [2024-11-26 19:00:32.147135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.012 [2024-11-26 19:00:32.147165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.012 [2024-11-26 19:00:32.147182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.012 [2024-11-26 19:00:32.150031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.012 [2024-11-26 19:00:32.150077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.012 BaseBdev2 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 spare_malloc 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 spare_delay 
00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 [2024-11-26 19:00:32.221224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.012 [2024-11-26 19:00:32.221312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.012 [2024-11-26 19:00:32.221342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:41.012 [2024-11-26 19:00:32.221359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.012 [2024-11-26 19:00:32.224417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.012 [2024-11-26 19:00:32.224482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.012 spare 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 [2024-11-26 19:00:32.229379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.012 [2024-11-26 19:00:32.231895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.012 [2024-11-26 19:00:32.232151] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.012 [2024-11-26 19:00:32.232175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.012 [2024-11-26 19:00:32.232490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:41.012 [2024-11-26 19:00:32.232706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.012 [2024-11-26 19:00:32.232721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.012 [2024-11-26 19:00:32.232930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.012 19:00:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.012 "name": "raid_bdev1", 00:13:41.012 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:41.012 "strip_size_kb": 0, 00:13:41.012 "state": "online", 00:13:41.012 "raid_level": "raid1", 00:13:41.012 "superblock": true, 00:13:41.012 "num_base_bdevs": 2, 00:13:41.012 "num_base_bdevs_discovered": 2, 00:13:41.012 "num_base_bdevs_operational": 2, 00:13:41.012 "base_bdevs_list": [ 00:13:41.012 { 00:13:41.012 "name": "BaseBdev1", 00:13:41.012 "uuid": "cbf31e4c-e279-5da6-a8cf-d7d46887f930", 00:13:41.012 "is_configured": true, 00:13:41.012 "data_offset": 2048, 00:13:41.012 "data_size": 63488 00:13:41.012 }, 00:13:41.012 { 00:13:41.012 "name": "BaseBdev2", 00:13:41.012 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:41.012 "is_configured": true, 00:13:41.012 "data_offset": 2048, 00:13:41.012 "data_size": 63488 00:13:41.012 } 00:13:41.012 ] 00:13:41.012 }' 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.012 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.580 19:00:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.580 [2024-11-26 19:00:32.749888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.580 [2024-11-26 19:00:32.857555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.580 "name": "raid_bdev1", 00:13:41.580 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:41.580 "strip_size_kb": 0, 00:13:41.580 "state": "online", 00:13:41.580 
"raid_level": "raid1", 00:13:41.580 "superblock": true, 00:13:41.580 "num_base_bdevs": 2, 00:13:41.580 "num_base_bdevs_discovered": 1, 00:13:41.580 "num_base_bdevs_operational": 1, 00:13:41.580 "base_bdevs_list": [ 00:13:41.580 { 00:13:41.580 "name": null, 00:13:41.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.580 "is_configured": false, 00:13:41.580 "data_offset": 0, 00:13:41.580 "data_size": 63488 00:13:41.580 }, 00:13:41.580 { 00:13:41.580 "name": "BaseBdev2", 00:13:41.580 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:41.580 "is_configured": true, 00:13:41.580 "data_offset": 2048, 00:13:41.580 "data_size": 63488 00:13:41.580 } 00:13:41.580 ] 00:13:41.580 }' 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.580 19:00:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.839 [2024-11-26 19:00:32.990013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:41.839 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.839 Zero copy mechanism will not be used. 00:13:41.839 Running I/O for 60 seconds... 
00:13:42.098 19:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.098 19:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.098 19:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.098 [2024-11-26 19:00:33.377646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.098 19:00:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.098 19:00:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.358 [2024-11-26 19:00:33.469439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:42.358 [2024-11-26 19:00:33.472238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.358 [2024-11-26 19:00:33.590324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.358 [2024-11-26 19:00:33.591019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.617 [2024-11-26 19:00:33.811372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.617 [2024-11-26 19:00:33.811781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.136 117.00 IOPS, 351.00 MiB/s [2024-11-26T19:00:34.503Z] [2024-11-26 19:00:34.278825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.136 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.394 "name": "raid_bdev1", 00:13:43.394 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:43.394 "strip_size_kb": 0, 00:13:43.394 "state": "online", 00:13:43.394 "raid_level": "raid1", 00:13:43.394 "superblock": true, 00:13:43.394 "num_base_bdevs": 2, 00:13:43.394 "num_base_bdevs_discovered": 2, 00:13:43.394 "num_base_bdevs_operational": 2, 00:13:43.394 "process": { 00:13:43.394 "type": "rebuild", 00:13:43.394 "target": "spare", 00:13:43.394 "progress": { 00:13:43.394 "blocks": 12288, 00:13:43.394 "percent": 19 00:13:43.394 } 00:13:43.394 }, 00:13:43.394 "base_bdevs_list": [ 00:13:43.394 { 00:13:43.394 "name": "spare", 00:13:43.394 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:43.394 "is_configured": true, 00:13:43.394 "data_offset": 2048, 00:13:43.394 "data_size": 63488 00:13:43.394 }, 00:13:43.394 { 00:13:43.394 "name": "BaseBdev2", 00:13:43.394 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:43.394 "is_configured": true, 
00:13:43.394 "data_offset": 2048, 00:13:43.394 "data_size": 63488 00:13:43.394 } 00:13:43.394 ] 00:13:43.394 }' 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.394 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.394 [2024-11-26 19:00:34.616829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.653 [2024-11-26 19:00:34.791441] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.653 [2024-11-26 19:00:34.802325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.653 [2024-11-26 19:00:34.802430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.653 [2024-11-26 19:00:34.802452] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.653 [2024-11-26 19:00:34.846475] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.653 "name": "raid_bdev1", 00:13:43.653 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:43.653 "strip_size_kb": 0, 00:13:43.653 "state": "online", 00:13:43.653 "raid_level": "raid1", 00:13:43.653 "superblock": true, 00:13:43.653 "num_base_bdevs": 2, 00:13:43.653 "num_base_bdevs_discovered": 1, 00:13:43.653 "num_base_bdevs_operational": 1, 00:13:43.653 "base_bdevs_list": [ 
00:13:43.653 { 00:13:43.653 "name": null, 00:13:43.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.653 "is_configured": false, 00:13:43.653 "data_offset": 0, 00:13:43.653 "data_size": 63488 00:13:43.653 }, 00:13:43.653 { 00:13:43.653 "name": "BaseBdev2", 00:13:43.653 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:43.653 "is_configured": true, 00:13:43.653 "data_offset": 2048, 00:13:43.653 "data_size": 63488 00:13:43.653 } 00:13:43.653 ] 00:13:43.653 }' 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.653 19:00:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 105.50 IOPS, 316.50 MiB/s [2024-11-26T19:00:35.586Z] 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.219 "name": 
"raid_bdev1", 00:13:44.219 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:44.219 "strip_size_kb": 0, 00:13:44.219 "state": "online", 00:13:44.219 "raid_level": "raid1", 00:13:44.219 "superblock": true, 00:13:44.219 "num_base_bdevs": 2, 00:13:44.219 "num_base_bdevs_discovered": 1, 00:13:44.219 "num_base_bdevs_operational": 1, 00:13:44.219 "base_bdevs_list": [ 00:13:44.219 { 00:13:44.219 "name": null, 00:13:44.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.219 "is_configured": false, 00:13:44.219 "data_offset": 0, 00:13:44.219 "data_size": 63488 00:13:44.219 }, 00:13:44.219 { 00:13:44.219 "name": "BaseBdev2", 00:13:44.219 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:44.219 "is_configured": true, 00:13:44.219 "data_offset": 2048, 00:13:44.219 "data_size": 63488 00:13:44.219 } 00:13:44.219 ] 00:13:44.219 }' 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.219 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.219 [2024-11-26 19:00:35.580536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.477 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.477 19:00:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:44.477 [2024-11-26 
19:00:35.667927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:44.477 [2024-11-26 19:00:35.670543] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.477 [2024-11-26 19:00:35.789394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.477 [2024-11-26 19:00:35.790081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.735 [2024-11-26 19:00:35.918148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.735 [2024-11-26 19:00:35.918536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.993 132.33 IOPS, 397.00 MiB/s [2024-11-26T19:00:36.360Z] [2024-11-26 19:00:36.294640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:45.251 [2024-11-26 19:00:36.416616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.251 [2024-11-26 19:00:36.417034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.509 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.509 "name": "raid_bdev1", 00:13:45.509 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:45.509 "strip_size_kb": 0, 00:13:45.509 "state": "online", 00:13:45.509 "raid_level": "raid1", 00:13:45.509 "superblock": true, 00:13:45.509 "num_base_bdevs": 2, 00:13:45.509 "num_base_bdevs_discovered": 2, 00:13:45.509 "num_base_bdevs_operational": 2, 00:13:45.509 "process": { 00:13:45.509 "type": "rebuild", 00:13:45.509 "target": "spare", 00:13:45.509 "progress": { 00:13:45.509 "blocks": 12288, 00:13:45.509 "percent": 19 00:13:45.509 } 00:13:45.509 }, 00:13:45.510 "base_bdevs_list": [ 00:13:45.510 { 00:13:45.510 "name": "spare", 00:13:45.510 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:45.510 "is_configured": true, 00:13:45.510 "data_offset": 2048, 00:13:45.510 "data_size": 63488 00:13:45.510 }, 00:13:45.510 { 00:13:45.510 "name": "BaseBdev2", 00:13:45.510 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:45.510 "is_configured": true, 00:13:45.510 "data_offset": 2048, 00:13:45.510 "data_size": 63488 00:13:45.510 } 00:13:45.510 ] 00:13:45.510 }' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.510 
19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:45.510 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.510 "name": "raid_bdev1", 00:13:45.510 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:45.510 "strip_size_kb": 0, 00:13:45.510 "state": "online", 00:13:45.510 "raid_level": "raid1", 00:13:45.510 "superblock": true, 00:13:45.510 "num_base_bdevs": 2, 00:13:45.510 "num_base_bdevs_discovered": 2, 00:13:45.510 "num_base_bdevs_operational": 2, 00:13:45.510 "process": { 00:13:45.510 "type": "rebuild", 00:13:45.510 "target": "spare", 00:13:45.510 "progress": { 00:13:45.510 "blocks": 14336, 00:13:45.510 "percent": 22 00:13:45.510 } 00:13:45.510 }, 00:13:45.510 "base_bdevs_list": [ 00:13:45.510 { 00:13:45.510 "name": "spare", 00:13:45.510 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:45.510 "is_configured": true, 00:13:45.510 "data_offset": 2048, 00:13:45.510 "data_size": 63488 00:13:45.510 }, 00:13:45.510 { 00:13:45.510 "name": "BaseBdev2", 00:13:45.510 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:45.510 "is_configured": true, 00:13:45.510 "data_offset": 2048, 00:13:45.510 "data_size": 63488 00:13:45.510 } 00:13:45.510 ] 00:13:45.510 }' 00:13:45.510 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.510 [2024-11-26 19:00:36.867797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.950 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.950 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.950 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.950 19:00:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.950 119.50 IOPS, 358.50 MiB/s [2024-11-26T19:00:37.317Z] [2024-11-26 19:00:37.116428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:45.950 [2024-11-26 19:00:37.117154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:46.208 [2024-11-26 19:00:37.330769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:46.208 [2024-11-26 19:00:37.569504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:46.771 19:00:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.771 19:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.771 "name": "raid_bdev1", 00:13:46.771 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:46.771 "strip_size_kb": 0, 00:13:46.771 "state": "online", 00:13:46.771 "raid_level": "raid1", 00:13:46.771 "superblock": true, 00:13:46.771 "num_base_bdevs": 2, 00:13:46.771 "num_base_bdevs_discovered": 2, 00:13:46.771 "num_base_bdevs_operational": 2, 00:13:46.771 "process": { 00:13:46.771 "type": "rebuild", 00:13:46.771 "target": "spare", 00:13:46.771 "progress": { 00:13:46.771 "blocks": 30720, 00:13:46.771 "percent": 48 00:13:46.771 } 00:13:46.771 }, 00:13:46.771 "base_bdevs_list": [ 00:13:46.771 { 00:13:46.772 "name": "spare", 00:13:46.772 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:46.772 "is_configured": true, 00:13:46.772 "data_offset": 2048, 00:13:46.772 "data_size": 63488 00:13:46.772 }, 00:13:46.772 { 00:13:46.772 "name": "BaseBdev2", 00:13:46.772 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:46.772 "is_configured": true, 00:13:46.772 "data_offset": 2048, 00:13:46.772 "data_size": 63488 00:13:46.772 } 00:13:46.772 ] 00:13:46.772 }' 00:13:46.772 19:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.772 [2024-11-26 19:00:38.025232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.772 106.60 IOPS, 319.80 MiB/s [2024-11-26T19:00:38.139Z] 19:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.772 19:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.772 19:00:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.772 19:00:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.704 [2024-11-26 19:00:38.701052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.963 96.00 IOPS, 288.00 MiB/s [2024-11-26T19:00:39.330Z] 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.963 "name": "raid_bdev1", 00:13:47.963 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:47.963 "strip_size_kb": 0, 00:13:47.963 "state": "online", 00:13:47.963 "raid_level": "raid1", 00:13:47.963 "superblock": true, 00:13:47.963 "num_base_bdevs": 2, 00:13:47.963 "num_base_bdevs_discovered": 2, 00:13:47.963 "num_base_bdevs_operational": 2, 
00:13:47.963 "process": { 00:13:47.963 "type": "rebuild", 00:13:47.963 "target": "spare", 00:13:47.963 "progress": { 00:13:47.963 "blocks": 51200, 00:13:47.963 "percent": 80 00:13:47.963 } 00:13:47.963 }, 00:13:47.963 "base_bdevs_list": [ 00:13:47.963 { 00:13:47.963 "name": "spare", 00:13:47.963 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:47.963 "is_configured": true, 00:13:47.963 "data_offset": 2048, 00:13:47.963 "data_size": 63488 00:13:47.963 }, 00:13:47.963 { 00:13:47.963 "name": "BaseBdev2", 00:13:47.963 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:47.963 "is_configured": true, 00:13:47.963 "data_offset": 2048, 00:13:47.963 "data_size": 63488 00:13:47.963 } 00:13:47.963 ] 00:13:47.963 }' 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.963 19:00:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.222 [2024-11-26 19:00:39.496597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:48.480 [2024-11-26 19:00:39.838039] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:48.739 [2024-11-26 19:00:39.945228] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:48.739 [2024-11-26 19:00:39.948699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.998 87.14 IOPS, 261.43 MiB/s [2024-11-26T19:00:40.365Z] 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.998 "name": "raid_bdev1", 00:13:48.998 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:48.998 "strip_size_kb": 0, 00:13:48.998 "state": "online", 00:13:48.998 "raid_level": "raid1", 00:13:48.998 "superblock": true, 00:13:48.998 "num_base_bdevs": 2, 00:13:48.998 "num_base_bdevs_discovered": 2, 00:13:48.998 "num_base_bdevs_operational": 2, 00:13:48.998 "base_bdevs_list": [ 00:13:48.998 { 00:13:48.998 "name": "spare", 00:13:48.998 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:48.998 "is_configured": true, 00:13:48.998 "data_offset": 2048, 00:13:48.998 "data_size": 63488 00:13:48.998 }, 00:13:48.998 { 00:13:48.998 "name": "BaseBdev2", 00:13:48.998 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:48.998 "is_configured": 
true, 00:13:48.998 "data_offset": 2048, 00:13:48.998 "data_size": 63488 00:13:48.998 } 00:13:48.998 ] 00:13:48.998 }' 00:13:48.998 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.257 "name": "raid_bdev1", 00:13:49.257 "uuid": 
"1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:49.257 "strip_size_kb": 0, 00:13:49.257 "state": "online", 00:13:49.257 "raid_level": "raid1", 00:13:49.257 "superblock": true, 00:13:49.257 "num_base_bdevs": 2, 00:13:49.257 "num_base_bdevs_discovered": 2, 00:13:49.257 "num_base_bdevs_operational": 2, 00:13:49.257 "base_bdevs_list": [ 00:13:49.257 { 00:13:49.257 "name": "spare", 00:13:49.257 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:49.257 "is_configured": true, 00:13:49.257 "data_offset": 2048, 00:13:49.257 "data_size": 63488 00:13:49.257 }, 00:13:49.257 { 00:13:49.257 "name": "BaseBdev2", 00:13:49.257 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:49.257 "is_configured": true, 00:13:49.257 "data_offset": 2048, 00:13:49.257 "data_size": 63488 00:13:49.257 } 00:13:49.257 ] 00:13:49.257 }' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.257 19:00:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.257 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.516 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.516 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.516 "name": "raid_bdev1", 00:13:49.516 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:49.516 "strip_size_kb": 0, 00:13:49.516 "state": "online", 00:13:49.516 "raid_level": "raid1", 00:13:49.516 "superblock": true, 00:13:49.516 "num_base_bdevs": 2, 00:13:49.516 "num_base_bdevs_discovered": 2, 00:13:49.516 "num_base_bdevs_operational": 2, 00:13:49.516 "base_bdevs_list": [ 00:13:49.516 { 00:13:49.516 "name": "spare", 00:13:49.516 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 2048, 00:13:49.516 "data_size": 63488 00:13:49.516 }, 00:13:49.516 { 00:13:49.516 "name": "BaseBdev2", 00:13:49.516 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:49.516 "is_configured": true, 00:13:49.516 "data_offset": 2048, 00:13:49.516 "data_size": 63488 00:13:49.516 } 00:13:49.516 ] 00:13:49.516 }' 00:13:49.516 19:00:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.516 19:00:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.775 81.50 IOPS, 244.50 MiB/s [2024-11-26T19:00:41.142Z] 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.775 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.775 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.775 [2024-11-26 19:00:41.093856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.775 [2024-11-26 19:00:41.093893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.034 00:13:50.034 Latency(us) 00:13:50.034 [2024-11-26T19:00:41.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.034 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:50.034 raid_bdev1 : 8.17 80.43 241.28 0.00 0.00 16834.20 273.69 112006.98 00:13:50.034 [2024-11-26T19:00:41.401Z] =================================================================================================================== 00:13:50.034 [2024-11-26T19:00:41.401Z] Total : 80.43 241.28 0.00 0.00 16834.20 273.69 112006.98 00:13:50.034 [2024-11-26 19:00:41.183497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.034 [2024-11-26 19:00:41.183564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.034 [2024-11-26 19:00:41.183689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.034 [2024-11-26 19:00:41.183707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.034 { 00:13:50.034 "results": [ 00:13:50.034 { 
00:13:50.034 "job": "raid_bdev1", 00:13:50.034 "core_mask": "0x1", 00:13:50.034 "workload": "randrw", 00:13:50.034 "percentage": 50, 00:13:50.034 "status": "finished", 00:13:50.034 "queue_depth": 2, 00:13:50.034 "io_size": 3145728, 00:13:50.034 "runtime": 8.168923, 00:13:50.034 "iops": 80.42675882732644, 00:13:50.034 "mibps": 241.2802764819793, 00:13:50.034 "io_failed": 0, 00:13:50.034 "io_timeout": 0, 00:13:50.034 "avg_latency_us": 16834.198688252385, 00:13:50.034 "min_latency_us": 273.6872727272727, 00:13:50.034 "max_latency_us": 112006.98181818181 00:13:50.034 } 00:13:50.034 ], 00:13:50.034 "core_count": 1 00:13:50.034 } 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.034 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:50.293 /dev/nbd0 00:13:50.293 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.293 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.293 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:50.293 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:50.293 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:13:50.294 1+0 records in 00:13:50.294 1+0 records out 00:13:50.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612027 s, 6.7 MB/s 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.294 19:00:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.294 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:50.861 /dev/nbd1 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.861 1+0 records in 00:13:50.861 1+0 records out 00:13:50.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304398 s, 13.5 MB/s 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.861 19:00:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.861 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.120 
19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.120 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.378 19:00:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.378 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.378 [2024-11-26 19:00:42.722473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.378 [2024-11-26 19:00:42.722547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.378 [2024-11-26 19:00:42.722579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:51.378 [2024-11-26 19:00:42.722593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.378 [2024-11-26 19:00:42.725809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.379 [2024-11-26 19:00:42.726026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.379 [2024-11-26 19:00:42.726170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:51.379 [2024-11-26 19:00:42.726234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.379 [2024-11-26 19:00:42.726457] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.379 spare 00:13:51.379 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.379 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:51.379 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.379 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.637 [2024-11-26 19:00:42.826688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:51.637 [2024-11-26 19:00:42.826723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.637 [2024-11-26 19:00:42.827083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:51.637 [2024-11-26 19:00:42.827312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:51.637 [2024-11-26 19:00:42.827356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:51.637 [2024-11-26 19:00:42.827558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.637 "name": "raid_bdev1", 00:13:51.637 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:51.637 "strip_size_kb": 0, 00:13:51.637 "state": "online", 00:13:51.637 "raid_level": "raid1", 00:13:51.637 "superblock": true, 00:13:51.637 "num_base_bdevs": 2, 00:13:51.637 "num_base_bdevs_discovered": 2, 00:13:51.637 "num_base_bdevs_operational": 2, 00:13:51.637 "base_bdevs_list": [ 00:13:51.637 { 00:13:51.637 "name": "spare", 00:13:51.637 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:51.637 "is_configured": true, 00:13:51.637 "data_offset": 2048, 00:13:51.637 "data_size": 63488 00:13:51.637 }, 00:13:51.637 { 00:13:51.637 "name": "BaseBdev2", 00:13:51.637 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:51.637 "is_configured": true, 00:13:51.637 
"data_offset": 2048, 00:13:51.637 "data_size": 63488 00:13:51.637 } 00:13:51.637 ] 00:13:51.637 }' 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.637 19:00:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.204 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.205 "name": "raid_bdev1", 00:13:52.205 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:52.205 "strip_size_kb": 0, 00:13:52.205 "state": "online", 00:13:52.205 "raid_level": "raid1", 00:13:52.205 "superblock": true, 00:13:52.205 "num_base_bdevs": 2, 00:13:52.205 "num_base_bdevs_discovered": 2, 00:13:52.205 "num_base_bdevs_operational": 2, 00:13:52.205 "base_bdevs_list": [ 00:13:52.205 { 00:13:52.205 "name": "spare", 00:13:52.205 "uuid": 
"69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:52.205 "is_configured": true, 00:13:52.205 "data_offset": 2048, 00:13:52.205 "data_size": 63488 00:13:52.205 }, 00:13:52.205 { 00:13:52.205 "name": "BaseBdev2", 00:13:52.205 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:52.205 "is_configured": true, 00:13:52.205 "data_offset": 2048, 00:13:52.205 "data_size": 63488 00:13:52.205 } 00:13:52.205 ] 00:13:52.205 }' 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.205 [2024-11-26 19:00:43.523047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.205 
19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.205 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.463 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.463 "name": "raid_bdev1", 00:13:52.463 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 
00:13:52.463 "strip_size_kb": 0, 00:13:52.463 "state": "online", 00:13:52.463 "raid_level": "raid1", 00:13:52.463 "superblock": true, 00:13:52.463 "num_base_bdevs": 2, 00:13:52.463 "num_base_bdevs_discovered": 1, 00:13:52.463 "num_base_bdevs_operational": 1, 00:13:52.463 "base_bdevs_list": [ 00:13:52.463 { 00:13:52.463 "name": null, 00:13:52.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.463 "is_configured": false, 00:13:52.463 "data_offset": 0, 00:13:52.463 "data_size": 63488 00:13:52.463 }, 00:13:52.463 { 00:13:52.463 "name": "BaseBdev2", 00:13:52.463 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:52.463 "is_configured": true, 00:13:52.463 "data_offset": 2048, 00:13:52.463 "data_size": 63488 00:13:52.463 } 00:13:52.463 ] 00:13:52.463 }' 00:13:52.463 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.464 19:00:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.723 19:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.723 19:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.723 19:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.723 [2024-11-26 19:00:44.051384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.723 [2024-11-26 19:00:44.051648] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:52.723 [2024-11-26 19:00:44.051674] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:52.723 [2024-11-26 19:00:44.051725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.723 [2024-11-26 19:00:44.068444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:52.723 19:00:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.723 19:00:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:52.723 [2024-11-26 19:00:44.071184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.100 "name": "raid_bdev1", 00:13:54.100 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:54.100 "strip_size_kb": 0, 00:13:54.100 "state": "online", 
00:13:54.100 "raid_level": "raid1", 00:13:54.100 "superblock": true, 00:13:54.100 "num_base_bdevs": 2, 00:13:54.100 "num_base_bdevs_discovered": 2, 00:13:54.100 "num_base_bdevs_operational": 2, 00:13:54.100 "process": { 00:13:54.100 "type": "rebuild", 00:13:54.100 "target": "spare", 00:13:54.100 "progress": { 00:13:54.100 "blocks": 20480, 00:13:54.100 "percent": 32 00:13:54.100 } 00:13:54.100 }, 00:13:54.100 "base_bdevs_list": [ 00:13:54.100 { 00:13:54.100 "name": "spare", 00:13:54.100 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:54.100 "is_configured": true, 00:13:54.100 "data_offset": 2048, 00:13:54.100 "data_size": 63488 00:13:54.100 }, 00:13:54.100 { 00:13:54.100 "name": "BaseBdev2", 00:13:54.100 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:54.100 "is_configured": true, 00:13:54.100 "data_offset": 2048, 00:13:54.100 "data_size": 63488 00:13:54.100 } 00:13:54.100 ] 00:13:54.100 }' 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.100 [2024-11-26 19:00:45.240801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.100 [2024-11-26 19:00:45.280714] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.100 [2024-11-26 
19:00:45.280789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.100 [2024-11-26 19:00:45.280812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.100 [2024-11-26 19:00:45.280826] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.100 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.100 "name": "raid_bdev1", 00:13:54.100 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:54.100 "strip_size_kb": 0, 00:13:54.100 "state": "online", 00:13:54.100 "raid_level": "raid1", 00:13:54.101 "superblock": true, 00:13:54.101 "num_base_bdevs": 2, 00:13:54.101 "num_base_bdevs_discovered": 1, 00:13:54.101 "num_base_bdevs_operational": 1, 00:13:54.101 "base_bdevs_list": [ 00:13:54.101 { 00:13:54.101 "name": null, 00:13:54.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.101 "is_configured": false, 00:13:54.101 "data_offset": 0, 00:13:54.101 "data_size": 63488 00:13:54.101 }, 00:13:54.101 { 00:13:54.101 "name": "BaseBdev2", 00:13:54.101 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:54.101 "is_configured": true, 00:13:54.101 "data_offset": 2048, 00:13:54.101 "data_size": 63488 00:13:54.101 } 00:13:54.101 ] 00:13:54.101 }' 00:13:54.101 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.101 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.668 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.668 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.668 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.668 [2024-11-26 19:00:45.836948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.668 [2024-11-26 19:00:45.837184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.668 [2024-11-26 19:00:45.837259] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:54.668 [2024-11-26 19:00:45.837524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.668 [2024-11-26 19:00:45.838216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.668 [2024-11-26 19:00:45.838473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.668 [2024-11-26 19:00:45.838617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:54.668 [2024-11-26 19:00:45.838642] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:54.668 [2024-11-26 19:00:45.838657] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:54.668 [2024-11-26 19:00:45.838690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.668 [2024-11-26 19:00:45.855923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:54.668 spare 00:13:54.668 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.668 19:00:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:54.668 [2024-11-26 19:00:45.858596] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.605 "name": "raid_bdev1", 00:13:55.605 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:55.605 "strip_size_kb": 0, 00:13:55.605 "state": "online", 00:13:55.605 "raid_level": "raid1", 00:13:55.605 "superblock": true, 00:13:55.605 "num_base_bdevs": 2, 00:13:55.605 "num_base_bdevs_discovered": 2, 00:13:55.605 "num_base_bdevs_operational": 2, 00:13:55.605 "process": { 00:13:55.605 "type": "rebuild", 00:13:55.605 "target": "spare", 00:13:55.605 "progress": { 00:13:55.605 "blocks": 20480, 00:13:55.605 "percent": 32 00:13:55.605 } 00:13:55.605 }, 00:13:55.605 "base_bdevs_list": [ 00:13:55.605 { 00:13:55.605 "name": "spare", 00:13:55.605 "uuid": "69ced36f-ee05-522e-aeeb-102c7640a1e1", 00:13:55.605 "is_configured": true, 00:13:55.605 "data_offset": 2048, 00:13:55.605 "data_size": 63488 00:13:55.605 }, 00:13:55.605 { 00:13:55.605 "name": "BaseBdev2", 00:13:55.605 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:55.605 "is_configured": true, 00:13:55.605 "data_offset": 2048, 00:13:55.605 "data_size": 63488 00:13:55.605 } 00:13:55.605 ] 00:13:55.605 }' 00:13:55.605 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.865 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:55.865 19:00:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.865 [2024-11-26 19:00:47.028548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.865 [2024-11-26 19:00:47.067850] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:55.865 [2024-11-26 19:00:47.067996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.865 [2024-11-26 19:00:47.068055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.865 [2024-11-26 19:00:47.068067] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.865 "name": "raid_bdev1", 00:13:55.865 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:55.865 "strip_size_kb": 0, 00:13:55.865 "state": "online", 00:13:55.865 "raid_level": "raid1", 00:13:55.865 "superblock": true, 00:13:55.865 "num_base_bdevs": 2, 00:13:55.865 "num_base_bdevs_discovered": 1, 00:13:55.865 "num_base_bdevs_operational": 1, 00:13:55.865 "base_bdevs_list": [ 00:13:55.865 { 00:13:55.865 "name": null, 00:13:55.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.865 "is_configured": false, 00:13:55.865 "data_offset": 0, 00:13:55.865 "data_size": 63488 00:13:55.865 }, 00:13:55.865 { 00:13:55.865 "name": "BaseBdev2", 00:13:55.865 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:55.865 "is_configured": true, 00:13:55.865 "data_offset": 2048, 00:13:55.865 "data_size": 63488 00:13:55.865 } 00:13:55.865 ] 00:13:55.865 }' 
00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.865 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.433 "name": "raid_bdev1", 00:13:56.433 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:56.433 "strip_size_kb": 0, 00:13:56.433 "state": "online", 00:13:56.433 "raid_level": "raid1", 00:13:56.433 "superblock": true, 00:13:56.433 "num_base_bdevs": 2, 00:13:56.433 "num_base_bdevs_discovered": 1, 00:13:56.433 "num_base_bdevs_operational": 1, 00:13:56.433 "base_bdevs_list": [ 00:13:56.433 { 00:13:56.433 "name": null, 00:13:56.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.433 "is_configured": false, 00:13:56.433 "data_offset": 0, 
00:13:56.433 "data_size": 63488 00:13:56.433 }, 00:13:56.433 { 00:13:56.433 "name": "BaseBdev2", 00:13:56.433 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:56.433 "is_configured": true, 00:13:56.433 "data_offset": 2048, 00:13:56.433 "data_size": 63488 00:13:56.433 } 00:13:56.433 ] 00:13:56.433 }' 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.433 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.692 [2024-11-26 19:00:47.817851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.692 [2024-11-26 19:00:47.817949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.692 [2024-11-26 19:00:47.817989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:56.692 [2024-11-26 19:00:47.818007] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.692 [2024-11-26 19:00:47.818632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.692 [2024-11-26 19:00:47.818677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.692 [2024-11-26 19:00:47.818823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:56.692 [2024-11-26 19:00:47.818845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:56.692 [2024-11-26 19:00:47.818859] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:56.692 [2024-11-26 19:00:47.818872] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:56.692 BaseBdev1 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.692 19:00:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.626 "name": "raid_bdev1", 00:13:57.626 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:57.626 "strip_size_kb": 0, 00:13:57.626 "state": "online", 00:13:57.626 "raid_level": "raid1", 00:13:57.626 "superblock": true, 00:13:57.626 "num_base_bdevs": 2, 00:13:57.626 "num_base_bdevs_discovered": 1, 00:13:57.626 "num_base_bdevs_operational": 1, 00:13:57.626 "base_bdevs_list": [ 00:13:57.626 { 00:13:57.626 "name": null, 00:13:57.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.626 "is_configured": false, 00:13:57.626 "data_offset": 0, 00:13:57.626 "data_size": 63488 00:13:57.626 }, 00:13:57.626 { 00:13:57.626 "name": "BaseBdev2", 00:13:57.626 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:57.626 "is_configured": true, 00:13:57.626 "data_offset": 2048, 00:13:57.626 "data_size": 63488 00:13:57.626 } 00:13:57.626 ] 00:13:57.626 }' 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.626 19:00:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:58.194 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.195 "name": "raid_bdev1", 00:13:58.195 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:58.195 "strip_size_kb": 0, 00:13:58.195 "state": "online", 00:13:58.195 "raid_level": "raid1", 00:13:58.195 "superblock": true, 00:13:58.195 "num_base_bdevs": 2, 00:13:58.195 "num_base_bdevs_discovered": 1, 00:13:58.195 "num_base_bdevs_operational": 1, 00:13:58.195 "base_bdevs_list": [ 00:13:58.195 { 00:13:58.195 "name": null, 00:13:58.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.195 "is_configured": false, 00:13:58.195 "data_offset": 0, 00:13:58.195 "data_size": 63488 00:13:58.195 }, 00:13:58.195 { 00:13:58.195 "name": "BaseBdev2", 00:13:58.195 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:58.195 "is_configured": true, 
00:13:58.195 "data_offset": 2048, 00:13:58.195 "data_size": 63488 00:13:58.195 } 00:13:58.195 ] 00:13:58.195 }' 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.195 [2024-11-26 19:00:49.542724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.195 [2024-11-26 19:00:49.542953] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:58.195 [2024-11-26 19:00:49.542995] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:58.195 request: 00:13:58.195 { 00:13:58.195 "base_bdev": "BaseBdev1", 00:13:58.195 "raid_bdev": "raid_bdev1", 00:13:58.195 "method": "bdev_raid_add_base_bdev", 00:13:58.195 "req_id": 1 00:13:58.195 } 00:13:58.195 Got JSON-RPC error response 00:13:58.195 response: 00:13:58.195 { 00:13:58.195 "code": -22, 00:13:58.195 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:58.195 } 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.195 19:00:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.571 "name": "raid_bdev1", 00:13:59.571 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:59.571 "strip_size_kb": 0, 00:13:59.571 "state": "online", 00:13:59.571 "raid_level": "raid1", 00:13:59.571 "superblock": true, 00:13:59.571 "num_base_bdevs": 2, 00:13:59.571 "num_base_bdevs_discovered": 1, 00:13:59.571 "num_base_bdevs_operational": 1, 00:13:59.571 "base_bdevs_list": [ 00:13:59.571 { 00:13:59.571 "name": null, 00:13:59.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.571 "is_configured": false, 00:13:59.571 "data_offset": 0, 00:13:59.571 "data_size": 63488 00:13:59.571 }, 00:13:59.571 { 00:13:59.571 "name": "BaseBdev2", 00:13:59.571 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:59.571 "is_configured": true, 00:13:59.571 "data_offset": 2048, 00:13:59.571 "data_size": 63488 00:13:59.571 } 00:13:59.571 ] 00:13:59.571 }' 
00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.571 19:00:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.830 "name": "raid_bdev1", 00:13:59.830 "uuid": "1aeb4045-16bd-4273-a607-967fca0633b9", 00:13:59.830 "strip_size_kb": 0, 00:13:59.830 "state": "online", 00:13:59.830 "raid_level": "raid1", 00:13:59.830 "superblock": true, 00:13:59.830 "num_base_bdevs": 2, 00:13:59.830 "num_base_bdevs_discovered": 1, 00:13:59.830 "num_base_bdevs_operational": 1, 00:13:59.830 "base_bdevs_list": [ 00:13:59.830 { 00:13:59.830 "name": null, 00:13:59.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.830 "is_configured": false, 00:13:59.830 "data_offset": 0, 
00:13:59.830 "data_size": 63488 00:13:59.830 }, 00:13:59.830 { 00:13:59.830 "name": "BaseBdev2", 00:13:59.830 "uuid": "8a34aa73-b716-5f38-8fdd-9060bdce3ee9", 00:13:59.830 "is_configured": true, 00:13:59.830 "data_offset": 2048, 00:13:59.830 "data_size": 63488 00:13:59.830 } 00:13:59.830 ] 00:13:59.830 }' 00:13:59.830 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77126 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77126 ']' 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77126 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77126 00:14:00.089 killing process with pid 77126 00:14:00.089 Received shutdown signal, test time was about 18.298121 seconds 00:14:00.089 00:14:00.089 Latency(us) 00:14:00.089 [2024-11-26T19:00:51.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.089 [2024-11-26T19:00:51.456Z] =================================================================================================================== 00:14:00.089 [2024-11-26T19:00:51.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77126' 00:14:00.089 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77126 00:14:00.090 [2024-11-26 19:00:51.290985] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.090 19:00:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77126 00:14:00.090 [2024-11-26 19:00:51.291165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.090 [2024-11-26 19:00:51.291258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.090 [2024-11-26 19:00:51.291305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:00.348 [2024-11-26 19:00:51.500112] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.285 19:00:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:01.285 00:14:01.285 real 0m21.676s 00:14:01.285 user 0m29.452s 00:14:01.285 sys 0m2.068s 00:14:01.285 19:00:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.285 19:00:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.285 ************************************ 00:14:01.285 END TEST raid_rebuild_test_sb_io 00:14:01.285 ************************************ 00:14:01.285 19:00:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:01.285 19:00:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:01.285 19:00:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:14:01.285 19:00:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.285 19:00:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.543 ************************************ 00:14:01.543 START TEST raid_rebuild_test 00:14:01.543 ************************************ 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77832 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77832 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77832 ']' 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.543 19:00:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.543 19:00:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.543 [2024-11-26 19:00:52.772963] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:14:01.543 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:01.543 Zero copy mechanism will not be used. 00:14:01.543 [2024-11-26 19:00:52.773151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77832 ] 00:14:01.801 [2024-11-26 19:00:52.955328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.801 [2024-11-26 19:00:53.082607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.059 [2024-11-26 19:00:53.288583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.059 [2024-11-26 19:00:53.288624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 BaseBdev1_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 [2024-11-26 19:00:53.755443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.624 [2024-11-26 19:00:53.755548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.624 [2024-11-26 19:00:53.755579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:02.624 [2024-11-26 19:00:53.755598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.624 [2024-11-26 19:00:53.758488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.624 [2024-11-26 19:00:53.758539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.624 BaseBdev1 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:02.624 BaseBdev2_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 [2024-11-26 19:00:53.807969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:02.624 [2024-11-26 19:00:53.808060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.624 [2024-11-26 19:00:53.808107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:02.624 [2024-11-26 19:00:53.808125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.624 [2024-11-26 19:00:53.810956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.624 [2024-11-26 19:00:53.810998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:02.624 BaseBdev2 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 BaseBdev3_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 [2024-11-26 19:00:53.868406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:02.624 [2024-11-26 19:00:53.868499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.624 [2024-11-26 19:00:53.868531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.624 [2024-11-26 19:00:53.868549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.624 [2024-11-26 19:00:53.871376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.624 [2024-11-26 19:00:53.871438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:02.624 BaseBdev3 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 BaseBdev4_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.624 [2024-11-26 19:00:53.921266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:02.624 [2024-11-26 19:00:53.921371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.624 [2024-11-26 19:00:53.921401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:02.624 [2024-11-26 19:00:53.921420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.624 [2024-11-26 19:00:53.924410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.624 [2024-11-26 19:00:53.924467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:02.624 BaseBdev4 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 spare_malloc 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 spare_delay 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.624 
19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 [2024-11-26 19:00:53.979486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.624 [2024-11-26 19:00:53.979553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.624 [2024-11-26 19:00:53.979579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:02.624 [2024-11-26 19:00:53.979597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.624 [2024-11-26 19:00:53.982621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.624 [2024-11-26 19:00:53.982714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.624 spare 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.624 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.624 [2024-11-26 19:00:53.987590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.881 [2024-11-26 19:00:53.990328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.881 [2024-11-26 19:00:53.990434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.881 [2024-11-26 19:00:53.990546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.881 [2024-11-26 19:00:53.990658] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:02.881 [2024-11-26 19:00:53.990682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:02.881 [2024-11-26 19:00:53.991104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:02.881 [2024-11-26 19:00:53.991326] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:02.881 [2024-11-26 19:00:53.991346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:02.881 [2024-11-26 19:00:53.991593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.881 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.881 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:02.881 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.881 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.882 19:00:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.882 19:00:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.882 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.882 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.882 "name": "raid_bdev1", 00:14:02.882 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:02.882 "strip_size_kb": 0, 00:14:02.882 "state": "online", 00:14:02.882 "raid_level": "raid1", 00:14:02.882 "superblock": false, 00:14:02.882 "num_base_bdevs": 4, 00:14:02.882 "num_base_bdevs_discovered": 4, 00:14:02.882 "num_base_bdevs_operational": 4, 00:14:02.882 "base_bdevs_list": [ 00:14:02.882 { 00:14:02.882 "name": "BaseBdev1", 00:14:02.882 "uuid": "73915780-a92c-5190-bc41-626768760e57", 00:14:02.882 "is_configured": true, 00:14:02.882 "data_offset": 0, 00:14:02.882 "data_size": 65536 00:14:02.882 }, 00:14:02.882 { 00:14:02.882 "name": "BaseBdev2", 00:14:02.882 "uuid": "ca0104fd-20f7-5d38-9d39-ffe4dc7b5fef", 00:14:02.882 "is_configured": true, 00:14:02.882 "data_offset": 0, 00:14:02.882 "data_size": 65536 00:14:02.882 }, 00:14:02.882 { 00:14:02.882 "name": "BaseBdev3", 00:14:02.882 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:02.882 "is_configured": true, 00:14:02.882 "data_offset": 0, 00:14:02.882 "data_size": 65536 00:14:02.882 }, 00:14:02.882 { 00:14:02.882 "name": "BaseBdev4", 00:14:02.882 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:02.882 "is_configured": true, 00:14:02.882 "data_offset": 0, 00:14:02.882 "data_size": 65536 00:14:02.882 } 00:14:02.882 ] 00:14:02.882 }' 00:14:02.882 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.882 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:03.139 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.139 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:03.139 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.139 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.139 [2024-11-26 19:00:54.504327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.403 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:03.695 [2024-11-26 19:00:54.840009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:03.695 /dev/nbd0 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.695 19:00:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.695 1+0 records in 00:14:03.695 1+0 records out 00:14:03.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251293 s, 16.3 MB/s 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:03.695 19:00:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:13.664 65536+0 records in 00:14:13.664 65536+0 records out 00:14:13.664 33554432 bytes (34 MB, 32 MiB) copied, 8.31559 s, 4.0 MB/s 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.664 
19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.664 [2024-11-26 19:01:03.562578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.664 [2024-11-26 19:01:03.577078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.664 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.664 "name": "raid_bdev1", 00:14:13.664 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:13.664 "strip_size_kb": 0, 00:14:13.664 "state": "online", 00:14:13.665 "raid_level": "raid1", 00:14:13.665 "superblock": false, 00:14:13.665 "num_base_bdevs": 4, 00:14:13.665 "num_base_bdevs_discovered": 3, 00:14:13.665 "num_base_bdevs_operational": 3, 00:14:13.665 "base_bdevs_list": [ 00:14:13.665 { 00:14:13.665 "name": null, 00:14:13.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.665 
"is_configured": false, 00:14:13.665 "data_offset": 0, 00:14:13.665 "data_size": 65536 00:14:13.665 }, 00:14:13.665 { 00:14:13.665 "name": "BaseBdev2", 00:14:13.665 "uuid": "ca0104fd-20f7-5d38-9d39-ffe4dc7b5fef", 00:14:13.665 "is_configured": true, 00:14:13.665 "data_offset": 0, 00:14:13.665 "data_size": 65536 00:14:13.665 }, 00:14:13.665 { 00:14:13.665 "name": "BaseBdev3", 00:14:13.665 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:13.665 "is_configured": true, 00:14:13.665 "data_offset": 0, 00:14:13.665 "data_size": 65536 00:14:13.665 }, 00:14:13.665 { 00:14:13.665 "name": "BaseBdev4", 00:14:13.665 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:13.665 "is_configured": true, 00:14:13.665 "data_offset": 0, 00:14:13.665 "data_size": 65536 00:14:13.665 } 00:14:13.665 ] 00:14:13.665 }' 00:14:13.665 19:01:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.665 19:01:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.665 19:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.665 19:01:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.665 19:01:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.665 [2024-11-26 19:01:04.121306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.665 [2024-11-26 19:01:04.135160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:13.665 19:01:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.665 19:01:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.665 [2024-11-26 19:01:04.137741] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.924 "name": "raid_bdev1", 00:14:13.924 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:13.924 "strip_size_kb": 0, 00:14:13.924 "state": "online", 00:14:13.924 "raid_level": "raid1", 00:14:13.924 "superblock": false, 00:14:13.924 "num_base_bdevs": 4, 00:14:13.924 "num_base_bdevs_discovered": 4, 00:14:13.924 "num_base_bdevs_operational": 4, 00:14:13.924 "process": { 00:14:13.924 "type": "rebuild", 00:14:13.924 "target": "spare", 00:14:13.924 "progress": { 00:14:13.924 "blocks": 20480, 00:14:13.924 "percent": 31 00:14:13.924 } 00:14:13.924 }, 00:14:13.924 "base_bdevs_list": [ 00:14:13.924 { 00:14:13.924 "name": "spare", 00:14:13.924 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:13.924 "is_configured": true, 00:14:13.924 "data_offset": 0, 00:14:13.924 "data_size": 65536 00:14:13.924 }, 00:14:13.924 { 00:14:13.924 "name": "BaseBdev2", 00:14:13.924 "uuid": 
"ca0104fd-20f7-5d38-9d39-ffe4dc7b5fef", 00:14:13.924 "is_configured": true, 00:14:13.924 "data_offset": 0, 00:14:13.924 "data_size": 65536 00:14:13.924 }, 00:14:13.924 { 00:14:13.924 "name": "BaseBdev3", 00:14:13.924 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:13.924 "is_configured": true, 00:14:13.924 "data_offset": 0, 00:14:13.924 "data_size": 65536 00:14:13.924 }, 00:14:13.924 { 00:14:13.924 "name": "BaseBdev4", 00:14:13.924 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:13.924 "is_configured": true, 00:14:13.924 "data_offset": 0, 00:14:13.924 "data_size": 65536 00:14:13.924 } 00:14:13.924 ] 00:14:13.924 }' 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.924 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 [2024-11-26 19:01:05.302707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.183 [2024-11-26 19:01:05.346436] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.183 [2024-11-26 19:01:05.346514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.183 [2024-11-26 19:01:05.346538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.183 [2024-11-26 19:01:05.346553] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.183 "name": "raid_bdev1", 00:14:14.183 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:14.183 "strip_size_kb": 0, 00:14:14.183 "state": "online", 
00:14:14.183 "raid_level": "raid1", 00:14:14.183 "superblock": false, 00:14:14.183 "num_base_bdevs": 4, 00:14:14.183 "num_base_bdevs_discovered": 3, 00:14:14.183 "num_base_bdevs_operational": 3, 00:14:14.183 "base_bdevs_list": [ 00:14:14.183 { 00:14:14.183 "name": null, 00:14:14.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.183 "is_configured": false, 00:14:14.183 "data_offset": 0, 00:14:14.183 "data_size": 65536 00:14:14.183 }, 00:14:14.183 { 00:14:14.183 "name": "BaseBdev2", 00:14:14.183 "uuid": "ca0104fd-20f7-5d38-9d39-ffe4dc7b5fef", 00:14:14.183 "is_configured": true, 00:14:14.183 "data_offset": 0, 00:14:14.183 "data_size": 65536 00:14:14.183 }, 00:14:14.183 { 00:14:14.183 "name": "BaseBdev3", 00:14:14.183 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:14.183 "is_configured": true, 00:14:14.183 "data_offset": 0, 00:14:14.183 "data_size": 65536 00:14:14.183 }, 00:14:14.183 { 00:14:14.183 "name": "BaseBdev4", 00:14:14.183 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:14.183 "is_configured": true, 00:14:14.183 "data_offset": 0, 00:14:14.183 "data_size": 65536 00:14:14.183 } 00:14:14.183 ] 00:14:14.183 }' 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.183 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.751 "name": "raid_bdev1", 00:14:14.751 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:14.751 "strip_size_kb": 0, 00:14:14.751 "state": "online", 00:14:14.751 "raid_level": "raid1", 00:14:14.751 "superblock": false, 00:14:14.751 "num_base_bdevs": 4, 00:14:14.751 "num_base_bdevs_discovered": 3, 00:14:14.751 "num_base_bdevs_operational": 3, 00:14:14.751 "base_bdevs_list": [ 00:14:14.751 { 00:14:14.751 "name": null, 00:14:14.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.751 "is_configured": false, 00:14:14.751 "data_offset": 0, 00:14:14.751 "data_size": 65536 00:14:14.751 }, 00:14:14.751 { 00:14:14.751 "name": "BaseBdev2", 00:14:14.751 "uuid": "ca0104fd-20f7-5d38-9d39-ffe4dc7b5fef", 00:14:14.751 "is_configured": true, 00:14:14.751 "data_offset": 0, 00:14:14.751 "data_size": 65536 00:14:14.751 }, 00:14:14.751 { 00:14:14.751 "name": "BaseBdev3", 00:14:14.751 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:14.751 "is_configured": true, 00:14:14.751 "data_offset": 0, 00:14:14.751 "data_size": 65536 00:14:14.751 }, 00:14:14.751 { 00:14:14.751 "name": "BaseBdev4", 00:14:14.751 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:14.751 "is_configured": true, 00:14:14.751 "data_offset": 0, 00:14:14.751 "data_size": 65536 00:14:14.751 } 00:14:14.751 ] 00:14:14.751 }' 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.751 19:01:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.751 19:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.751 19:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.751 19:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.751 19:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.751 [2024-11-26 19:01:06.044571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.751 [2024-11-26 19:01:06.058881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:14.751 19:01:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.751 19:01:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:14.751 [2024-11-26 19:01:06.061610] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.129 19:01:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.129 "name": "raid_bdev1", 00:14:16.129 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:16.129 "strip_size_kb": 0, 00:14:16.129 "state": "online", 00:14:16.129 "raid_level": "raid1", 00:14:16.129 "superblock": false, 00:14:16.129 "num_base_bdevs": 4, 00:14:16.129 "num_base_bdevs_discovered": 4, 00:14:16.129 "num_base_bdevs_operational": 4, 00:14:16.129 "process": { 00:14:16.129 "type": "rebuild", 00:14:16.129 "target": "spare", 00:14:16.129 "progress": { 00:14:16.129 "blocks": 20480, 00:14:16.129 "percent": 31 00:14:16.129 } 00:14:16.129 }, 00:14:16.129 "base_bdevs_list": [ 00:14:16.129 { 00:14:16.129 "name": "spare", 00:14:16.129 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:16.129 "is_configured": true, 00:14:16.129 "data_offset": 0, 00:14:16.129 "data_size": 65536 00:14:16.129 }, 00:14:16.129 { 00:14:16.129 "name": "BaseBdev2", 00:14:16.129 "uuid": "ca0104fd-20f7-5d38-9d39-ffe4dc7b5fef", 00:14:16.129 "is_configured": true, 00:14:16.129 "data_offset": 0, 00:14:16.129 "data_size": 65536 00:14:16.129 }, 00:14:16.129 { 00:14:16.129 "name": "BaseBdev3", 00:14:16.129 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:16.129 "is_configured": true, 00:14:16.129 "data_offset": 0, 00:14:16.129 "data_size": 65536 00:14:16.129 }, 00:14:16.129 { 00:14:16.129 "name": "BaseBdev4", 00:14:16.129 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:16.129 "is_configured": true, 00:14:16.129 "data_offset": 0, 00:14:16.129 "data_size": 65536 00:14:16.129 } 00:14:16.129 ] 00:14:16.129 }' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.129 [2024-11-26 19:01:07.238706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.129 [2024-11-26 19:01:07.270488] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.129 
19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.129 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.130 "name": "raid_bdev1", 00:14:16.130 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:16.130 "strip_size_kb": 0, 00:14:16.130 "state": "online", 00:14:16.130 "raid_level": "raid1", 00:14:16.130 "superblock": false, 00:14:16.130 "num_base_bdevs": 4, 00:14:16.130 "num_base_bdevs_discovered": 3, 00:14:16.130 "num_base_bdevs_operational": 3, 00:14:16.130 "process": { 00:14:16.130 "type": "rebuild", 00:14:16.130 "target": "spare", 00:14:16.130 "progress": { 00:14:16.130 "blocks": 24576, 00:14:16.130 "percent": 37 00:14:16.130 } 00:14:16.130 }, 00:14:16.130 "base_bdevs_list": [ 00:14:16.130 { 00:14:16.130 "name": "spare", 00:14:16.130 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:16.130 "is_configured": true, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 }, 00:14:16.130 { 00:14:16.130 "name": null, 00:14:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.130 "is_configured": false, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 }, 00:14:16.130 { 00:14:16.130 "name": "BaseBdev3", 00:14:16.130 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:16.130 "is_configured": true, 
00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 }, 00:14:16.130 { 00:14:16.130 "name": "BaseBdev4", 00:14:16.130 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:16.130 "is_configured": true, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 } 00:14:16.130 ] 00:14:16.130 }' 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.130 19:01:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.130 "name": "raid_bdev1", 00:14:16.130 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:16.130 "strip_size_kb": 0, 00:14:16.130 "state": "online", 00:14:16.130 "raid_level": "raid1", 00:14:16.130 "superblock": false, 00:14:16.130 "num_base_bdevs": 4, 00:14:16.130 "num_base_bdevs_discovered": 3, 00:14:16.130 "num_base_bdevs_operational": 3, 00:14:16.130 "process": { 00:14:16.130 "type": "rebuild", 00:14:16.130 "target": "spare", 00:14:16.130 "progress": { 00:14:16.130 "blocks": 26624, 00:14:16.130 "percent": 40 00:14:16.130 } 00:14:16.130 }, 00:14:16.130 "base_bdevs_list": [ 00:14:16.130 { 00:14:16.130 "name": "spare", 00:14:16.130 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:16.130 "is_configured": true, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 }, 00:14:16.130 { 00:14:16.130 "name": null, 00:14:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.130 "is_configured": false, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 }, 00:14:16.130 { 00:14:16.130 "name": "BaseBdev3", 00:14:16.130 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:16.130 "is_configured": true, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 }, 00:14:16.130 { 00:14:16.130 "name": "BaseBdev4", 00:14:16.130 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:16.130 "is_configured": true, 00:14:16.130 "data_offset": 0, 00:14:16.130 "data_size": 65536 00:14:16.130 } 00:14:16.130 ] 00:14:16.130 }' 00:14:16.130 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.388 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.388 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:16.388 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.388 19:01:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.323 "name": "raid_bdev1", 00:14:17.323 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:17.323 "strip_size_kb": 0, 00:14:17.323 "state": "online", 00:14:17.323 "raid_level": "raid1", 00:14:17.323 "superblock": false, 00:14:17.323 "num_base_bdevs": 4, 00:14:17.323 "num_base_bdevs_discovered": 3, 00:14:17.323 "num_base_bdevs_operational": 3, 00:14:17.323 "process": { 00:14:17.323 "type": "rebuild", 00:14:17.323 "target": "spare", 00:14:17.323 "progress": { 00:14:17.323 
"blocks": 51200, 00:14:17.323 "percent": 78 00:14:17.323 } 00:14:17.323 }, 00:14:17.323 "base_bdevs_list": [ 00:14:17.323 { 00:14:17.323 "name": "spare", 00:14:17.323 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:17.323 "is_configured": true, 00:14:17.323 "data_offset": 0, 00:14:17.323 "data_size": 65536 00:14:17.323 }, 00:14:17.323 { 00:14:17.323 "name": null, 00:14:17.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.323 "is_configured": false, 00:14:17.323 "data_offset": 0, 00:14:17.323 "data_size": 65536 00:14:17.323 }, 00:14:17.323 { 00:14:17.323 "name": "BaseBdev3", 00:14:17.323 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:17.323 "is_configured": true, 00:14:17.323 "data_offset": 0, 00:14:17.323 "data_size": 65536 00:14:17.323 }, 00:14:17.323 { 00:14:17.323 "name": "BaseBdev4", 00:14:17.323 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:17.323 "is_configured": true, 00:14:17.323 "data_offset": 0, 00:14:17.323 "data_size": 65536 00:14:17.323 } 00:14:17.323 ] 00:14:17.323 }' 00:14:17.323 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.582 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.582 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.582 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.582 19:01:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.148 [2024-11-26 19:01:09.285058] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:18.149 [2024-11-26 19:01:09.285188] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:18.149 [2024-11-26 19:01:09.285284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.744 "name": "raid_bdev1", 00:14:18.744 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:18.744 "strip_size_kb": 0, 00:14:18.744 "state": "online", 00:14:18.744 "raid_level": "raid1", 00:14:18.744 "superblock": false, 00:14:18.744 "num_base_bdevs": 4, 00:14:18.744 "num_base_bdevs_discovered": 3, 00:14:18.744 "num_base_bdevs_operational": 3, 00:14:18.744 "base_bdevs_list": [ 00:14:18.744 { 00:14:18.744 "name": "spare", 00:14:18.744 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:18.744 "is_configured": true, 00:14:18.744 "data_offset": 0, 00:14:18.744 "data_size": 65536 00:14:18.744 }, 00:14:18.744 { 00:14:18.744 "name": null, 00:14:18.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.744 "is_configured": false, 00:14:18.744 
"data_offset": 0, 00:14:18.744 "data_size": 65536 00:14:18.744 }, 00:14:18.744 { 00:14:18.744 "name": "BaseBdev3", 00:14:18.744 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:18.744 "is_configured": true, 00:14:18.744 "data_offset": 0, 00:14:18.744 "data_size": 65536 00:14:18.744 }, 00:14:18.744 { 00:14:18.744 "name": "BaseBdev4", 00:14:18.744 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:18.744 "is_configured": true, 00:14:18.744 "data_offset": 0, 00:14:18.744 "data_size": 65536 00:14:18.744 } 00:14:18.744 ] 00:14:18.744 }' 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.744 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.744 "name": "raid_bdev1", 00:14:18.744 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:18.744 "strip_size_kb": 0, 00:14:18.744 "state": "online", 00:14:18.744 "raid_level": "raid1", 00:14:18.744 "superblock": false, 00:14:18.744 "num_base_bdevs": 4, 00:14:18.744 "num_base_bdevs_discovered": 3, 00:14:18.745 "num_base_bdevs_operational": 3, 00:14:18.745 "base_bdevs_list": [ 00:14:18.745 { 00:14:18.745 "name": "spare", 00:14:18.745 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 }, 00:14:18.745 { 00:14:18.745 "name": null, 00:14:18.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.745 "is_configured": false, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 }, 00:14:18.745 { 00:14:18.745 "name": "BaseBdev3", 00:14:18.745 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 }, 00:14:18.745 { 00:14:18.745 "name": "BaseBdev4", 00:14:18.745 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 } 00:14:18.745 ] 00:14:18.745 }' 00:14:18.745 19:01:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.745 
19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.745 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.004 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.004 "name": "raid_bdev1", 00:14:19.004 "uuid": "c1ca3d1c-0908-44b2-ba51-1f9911debcf8", 00:14:19.004 "strip_size_kb": 0, 00:14:19.004 "state": "online", 00:14:19.004 "raid_level": "raid1", 00:14:19.004 "superblock": false, 00:14:19.004 "num_base_bdevs": 4, 00:14:19.004 "num_base_bdevs_discovered": 
3, 00:14:19.004 "num_base_bdevs_operational": 3, 00:14:19.004 "base_bdevs_list": [ 00:14:19.004 { 00:14:19.004 "name": "spare", 00:14:19.004 "uuid": "b5707f20-a148-5830-98ea-68649d253ec6", 00:14:19.004 "is_configured": true, 00:14:19.004 "data_offset": 0, 00:14:19.004 "data_size": 65536 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "name": null, 00:14:19.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.004 "is_configured": false, 00:14:19.004 "data_offset": 0, 00:14:19.004 "data_size": 65536 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "name": "BaseBdev3", 00:14:19.004 "uuid": "5f4d091c-7ef8-5f03-9e82-8b5cd3a67d83", 00:14:19.004 "is_configured": true, 00:14:19.004 "data_offset": 0, 00:14:19.004 "data_size": 65536 00:14:19.004 }, 00:14:19.004 { 00:14:19.004 "name": "BaseBdev4", 00:14:19.004 "uuid": "43e550f2-00b2-55df-9834-058057e61aac", 00:14:19.004 "is_configured": true, 00:14:19.004 "data_offset": 0, 00:14:19.004 "data_size": 65536 00:14:19.004 } 00:14:19.004 ] 00:14:19.004 }' 00:14:19.004 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.004 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.572 [2024-11-26 19:01:10.636059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.572 [2024-11-26 19:01:10.636248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.572 [2024-11-26 19:01:10.636373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.572 [2024-11-26 19:01:10.636488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:14:19.572 [2024-11-26 19:01:10.636505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.572 19:01:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:19.831 /dev/nbd0 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.831 1+0 records in 00:14:19.831 1+0 records out 00:14:19.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035183 s, 11.6 MB/s 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.831 19:01:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.831 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:20.090 /dev/nbd1 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:20.090 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.091 1+0 records in 00:14:20.091 1+0 records out 00:14:20.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394463 s, 10.4 MB/s 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.091 19:01:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.350 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.609 19:01:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:20.868 19:01:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77832 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77832 ']' 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77832 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77832 00:14:20.869 killing process with pid 77832 00:14:20.869 Received shutdown signal, test time was about 60.000000 seconds 00:14:20.869 00:14:20.869 Latency(us) 00:14:20.869 [2024-11-26T19:01:12.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.869 [2024-11-26T19:01:12.236Z] =================================================================================================================== 00:14:20.869 [2024-11-26T19:01:12.236Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77832' 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77832 00:14:20.869 19:01:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77832 00:14:20.869 [2024-11-26 19:01:12.171341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.438 [2024-11-26 19:01:12.579557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:22.374 00:14:22.374 real 0m20.949s 00:14:22.374 user 0m23.398s 00:14:22.374 sys 0m3.639s 00:14:22.374 ************************************ 00:14:22.374 END TEST raid_rebuild_test 00:14:22.374 ************************************ 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.374 
19:01:13 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:22.374 19:01:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:22.374 19:01:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.374 19:01:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.374 ************************************ 00:14:22.374 START TEST raid_rebuild_test_sb 00:14:22.374 ************************************ 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.374 
19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:22.374 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78306 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78306 00:14:22.375 19:01:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78306 ']' 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.375 19:01:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.634 [2024-11-26 19:01:13.781482] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:14:22.634 [2024-11-26 19:01:13.781883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:22.634 Zero copy mechanism will not be used. 
00:14:22.634 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78306 ] 00:14:22.634 [2024-11-26 19:01:13.966799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.892 [2024-11-26 19:01:14.096224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.152 [2024-11-26 19:01:14.281243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.152 [2024-11-26 19:01:14.281306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.412 BaseBdev1_malloc 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.412 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.412 [2024-11-26 19:01:14.772932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:23.412 [2024-11-26 19:01:14.773231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:23.412 [2024-11-26 19:01:14.773273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:23.412 [2024-11-26 19:01:14.773296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.412 [2024-11-26 19:01:14.776311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.671 [2024-11-26 19:01:14.776525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:23.671 BaseBdev1 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 BaseBdev2_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 [2024-11-26 19:01:14.821618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:23.671 [2024-11-26 19:01:14.821701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.671 [2024-11-26 19:01:14.821732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:23.671 [2024-11-26 19:01:14.821749] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.671 [2024-11-26 19:01:14.824500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.671 [2024-11-26 19:01:14.824722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:23.671 BaseBdev2 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 BaseBdev3_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 [2024-11-26 19:01:14.885710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:23.671 [2024-11-26 19:01:14.885808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.671 [2024-11-26 19:01:14.885841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:23.671 [2024-11-26 19:01:14.885861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.671 [2024-11-26 19:01:14.888611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:23.671 [2024-11-26 19:01:14.888792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:23.671 BaseBdev3 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 BaseBdev4_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 [2024-11-26 19:01:14.934412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:23.671 [2024-11-26 19:01:14.934519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.671 [2024-11-26 19:01:14.934549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:23.671 [2024-11-26 19:01:14.934567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.671 [2024-11-26 19:01:14.937420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.671 [2024-11-26 19:01:14.937483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:23.671 BaseBdev4 00:14:23.671 19:01:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 spare_malloc 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.671 spare_delay 00:14:23.671 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.672 19:01:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.672 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.672 19:01:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.672 [2024-11-26 19:01:14.998092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:23.672 [2024-11-26 19:01:14.998159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.672 [2024-11-26 19:01:14.998189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:23.672 [2024-11-26 19:01:14.998208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.672 [2024-11-26 19:01:15.001050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:23.672 [2024-11-26 19:01:15.001107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.672 spare 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.672 [2024-11-26 19:01:15.006139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.672 [2024-11-26 19:01:15.008630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.672 [2024-11-26 19:01:15.008867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.672 [2024-11-26 19:01:15.009001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.672 [2024-11-26 19:01:15.009277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:23.672 [2024-11-26 19:01:15.009301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:23.672 [2024-11-26 19:01:15.009617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:23.672 [2024-11-26 19:01:15.009817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:23.672 [2024-11-26 19:01:15.009832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:23.672 [2024-11-26 19:01:15.010105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.672 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.931 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.931 "name": "raid_bdev1", 00:14:23.931 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:23.931 "strip_size_kb": 0, 00:14:23.931 "state": "online", 00:14:23.931 "raid_level": "raid1", 
00:14:23.931 "superblock": true, 00:14:23.931 "num_base_bdevs": 4, 00:14:23.931 "num_base_bdevs_discovered": 4, 00:14:23.931 "num_base_bdevs_operational": 4, 00:14:23.931 "base_bdevs_list": [ 00:14:23.931 { 00:14:23.931 "name": "BaseBdev1", 00:14:23.931 "uuid": "a8221234-7e2c-508c-a8f5-362f7090c52a", 00:14:23.931 "is_configured": true, 00:14:23.931 "data_offset": 2048, 00:14:23.931 "data_size": 63488 00:14:23.931 }, 00:14:23.931 { 00:14:23.931 "name": "BaseBdev2", 00:14:23.931 "uuid": "767ab23b-b283-52fa-8e53-8b857ef98791", 00:14:23.931 "is_configured": true, 00:14:23.931 "data_offset": 2048, 00:14:23.931 "data_size": 63488 00:14:23.931 }, 00:14:23.931 { 00:14:23.931 "name": "BaseBdev3", 00:14:23.931 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:23.931 "is_configured": true, 00:14:23.931 "data_offset": 2048, 00:14:23.931 "data_size": 63488 00:14:23.931 }, 00:14:23.931 { 00:14:23.931 "name": "BaseBdev4", 00:14:23.931 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:23.931 "is_configured": true, 00:14:23.931 "data_offset": 2048, 00:14:23.931 "data_size": 63488 00:14:23.931 } 00:14:23.931 ] 00:14:23.931 }' 00:14:23.931 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.931 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.191 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.191 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:24.191 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.191 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.191 [2024-11-26 19:01:15.538778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.464 
19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:24.464 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:24.723 [2024-11-26 19:01:15.914535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:24.723 /dev/nbd0 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.723 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.724 1+0 records in 00:14:24.724 1+0 records out 00:14:24.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474551 s, 8.6 MB/s 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:24.724 19:01:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:24.724 19:01:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:32.846 63488+0 records in 00:14:32.846 63488+0 records out 00:14:32.846 32505856 bytes (33 MB, 31 MiB) copied, 7.77524 s, 4.2 MB/s 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.846 19:01:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.846 [2024-11-26 19:01:24.023515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.846 [2024-11-26 19:01:24.051562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.846 19:01:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.846 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.847 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.847 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.847 "name": "raid_bdev1", 00:14:32.847 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:32.847 "strip_size_kb": 0, 00:14:32.847 "state": "online", 00:14:32.847 "raid_level": "raid1", 00:14:32.847 "superblock": true, 00:14:32.847 "num_base_bdevs": 4, 00:14:32.847 "num_base_bdevs_discovered": 3, 00:14:32.847 "num_base_bdevs_operational": 3, 00:14:32.847 "base_bdevs_list": [ 00:14:32.847 { 00:14:32.847 "name": null, 00:14:32.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.847 "is_configured": false, 00:14:32.847 "data_offset": 0, 00:14:32.847 "data_size": 63488 00:14:32.847 }, 00:14:32.847 { 00:14:32.847 "name": "BaseBdev2", 00:14:32.847 "uuid": "767ab23b-b283-52fa-8e53-8b857ef98791", 00:14:32.847 "is_configured": true, 00:14:32.847 "data_offset": 2048, 00:14:32.847 "data_size": 63488 00:14:32.847 }, 00:14:32.847 { 00:14:32.847 "name": "BaseBdev3", 00:14:32.847 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 
00:14:32.847 "is_configured": true, 00:14:32.847 "data_offset": 2048, 00:14:32.847 "data_size": 63488 00:14:32.847 }, 00:14:32.847 { 00:14:32.847 "name": "BaseBdev4", 00:14:32.847 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:32.847 "is_configured": true, 00:14:32.847 "data_offset": 2048, 00:14:32.847 "data_size": 63488 00:14:32.847 } 00:14:32.847 ] 00:14:32.847 }' 00:14:32.847 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.847 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.422 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.422 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.422 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.422 [2024-11-26 19:01:24.563749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.422 [2024-11-26 19:01:24.578351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:33.422 19:01:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.422 19:01:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:33.422 [2024-11-26 19:01:24.581196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.360 "name": "raid_bdev1", 00:14:34.360 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:34.360 "strip_size_kb": 0, 00:14:34.360 "state": "online", 00:14:34.360 "raid_level": "raid1", 00:14:34.360 "superblock": true, 00:14:34.360 "num_base_bdevs": 4, 00:14:34.360 "num_base_bdevs_discovered": 4, 00:14:34.360 "num_base_bdevs_operational": 4, 00:14:34.360 "process": { 00:14:34.360 "type": "rebuild", 00:14:34.360 "target": "spare", 00:14:34.360 "progress": { 00:14:34.360 "blocks": 20480, 00:14:34.360 "percent": 32 00:14:34.360 } 00:14:34.360 }, 00:14:34.360 "base_bdevs_list": [ 00:14:34.360 { 00:14:34.360 "name": "spare", 00:14:34.360 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:34.360 "is_configured": true, 00:14:34.360 "data_offset": 2048, 00:14:34.360 "data_size": 63488 00:14:34.360 }, 00:14:34.360 { 00:14:34.360 "name": "BaseBdev2", 00:14:34.360 "uuid": "767ab23b-b283-52fa-8e53-8b857ef98791", 00:14:34.360 "is_configured": true, 00:14:34.360 "data_offset": 2048, 00:14:34.360 "data_size": 63488 00:14:34.360 }, 00:14:34.360 { 00:14:34.360 "name": "BaseBdev3", 00:14:34.360 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:34.360 "is_configured": true, 00:14:34.360 "data_offset": 2048, 00:14:34.360 "data_size": 63488 00:14:34.360 }, 00:14:34.360 { 
00:14:34.360 "name": "BaseBdev4", 00:14:34.360 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:34.360 "is_configured": true, 00:14:34.360 "data_offset": 2048, 00:14:34.360 "data_size": 63488 00:14:34.360 } 00:14:34.360 ] 00:14:34.360 }' 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.360 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.619 [2024-11-26 19:01:25.751457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.619 [2024-11-26 19:01:25.790205] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:34.619 [2024-11-26 19:01:25.790330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.619 [2024-11-26 19:01:25.790357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.619 [2024-11-26 19:01:25.790371] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.619 "name": "raid_bdev1", 00:14:34.619 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:34.619 "strip_size_kb": 0, 00:14:34.619 "state": "online", 00:14:34.619 "raid_level": "raid1", 00:14:34.619 "superblock": true, 00:14:34.619 "num_base_bdevs": 4, 00:14:34.619 "num_base_bdevs_discovered": 3, 00:14:34.619 "num_base_bdevs_operational": 3, 00:14:34.619 "base_bdevs_list": [ 00:14:34.619 { 00:14:34.619 "name": null, 00:14:34.619 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:34.619 "is_configured": false, 00:14:34.619 "data_offset": 0, 00:14:34.619 "data_size": 63488 00:14:34.619 }, 00:14:34.619 { 00:14:34.619 "name": "BaseBdev2", 00:14:34.619 "uuid": "767ab23b-b283-52fa-8e53-8b857ef98791", 00:14:34.619 "is_configured": true, 00:14:34.619 "data_offset": 2048, 00:14:34.619 "data_size": 63488 00:14:34.619 }, 00:14:34.619 { 00:14:34.619 "name": "BaseBdev3", 00:14:34.619 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:34.619 "is_configured": true, 00:14:34.619 "data_offset": 2048, 00:14:34.619 "data_size": 63488 00:14:34.619 }, 00:14:34.619 { 00:14:34.619 "name": "BaseBdev4", 00:14:34.619 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:34.619 "is_configured": true, 00:14:34.619 "data_offset": 2048, 00:14:34.619 "data_size": 63488 00:14:34.619 } 00:14:34.619 ] 00:14:34.619 }' 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.619 19:01:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.188 "name": "raid_bdev1", 00:14:35.188 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:35.188 "strip_size_kb": 0, 00:14:35.188 "state": "online", 00:14:35.188 "raid_level": "raid1", 00:14:35.188 "superblock": true, 00:14:35.188 "num_base_bdevs": 4, 00:14:35.188 "num_base_bdevs_discovered": 3, 00:14:35.188 "num_base_bdevs_operational": 3, 00:14:35.188 "base_bdevs_list": [ 00:14:35.188 { 00:14:35.188 "name": null, 00:14:35.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.188 "is_configured": false, 00:14:35.188 "data_offset": 0, 00:14:35.188 "data_size": 63488 00:14:35.188 }, 00:14:35.188 { 00:14:35.188 "name": "BaseBdev2", 00:14:35.188 "uuid": "767ab23b-b283-52fa-8e53-8b857ef98791", 00:14:35.188 "is_configured": true, 00:14:35.188 "data_offset": 2048, 00:14:35.188 "data_size": 63488 00:14:35.188 }, 00:14:35.188 { 00:14:35.188 "name": "BaseBdev3", 00:14:35.188 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:35.188 "is_configured": true, 00:14:35.188 "data_offset": 2048, 00:14:35.188 "data_size": 63488 00:14:35.188 }, 00:14:35.188 { 00:14:35.188 "name": "BaseBdev4", 00:14:35.188 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:35.188 "is_configured": true, 00:14:35.188 "data_offset": 2048, 00:14:35.188 "data_size": 63488 00:14:35.188 } 00:14:35.188 ] 00:14:35.188 }' 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.188 [2024-11-26 19:01:26.535056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.188 [2024-11-26 19:01:26.548949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.188 19:01:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:35.188 [2024-11-26 19:01:26.551684] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.566 19:01:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.566 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.566 "name": "raid_bdev1", 00:14:36.566 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:36.566 "strip_size_kb": 0, 00:14:36.566 "state": "online", 00:14:36.566 "raid_level": "raid1", 00:14:36.566 "superblock": true, 00:14:36.566 "num_base_bdevs": 4, 00:14:36.566 "num_base_bdevs_discovered": 4, 00:14:36.566 "num_base_bdevs_operational": 4, 00:14:36.566 "process": { 00:14:36.566 "type": "rebuild", 00:14:36.566 "target": "spare", 00:14:36.566 "progress": { 00:14:36.566 "blocks": 20480, 00:14:36.566 "percent": 32 00:14:36.566 } 00:14:36.566 }, 00:14:36.566 "base_bdevs_list": [ 00:14:36.566 { 00:14:36.566 "name": "spare", 00:14:36.566 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:36.566 "is_configured": true, 00:14:36.566 "data_offset": 2048, 00:14:36.566 "data_size": 63488 00:14:36.566 }, 00:14:36.566 { 00:14:36.566 "name": "BaseBdev2", 00:14:36.566 "uuid": "767ab23b-b283-52fa-8e53-8b857ef98791", 00:14:36.566 "is_configured": true, 00:14:36.566 "data_offset": 2048, 00:14:36.566 "data_size": 63488 00:14:36.566 }, 00:14:36.566 { 00:14:36.566 "name": "BaseBdev3", 00:14:36.566 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:36.567 "is_configured": true, 00:14:36.567 "data_offset": 2048, 00:14:36.567 "data_size": 63488 00:14:36.567 }, 00:14:36.567 { 00:14:36.567 "name": "BaseBdev4", 00:14:36.567 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:36.567 "is_configured": true, 00:14:36.567 "data_offset": 2048, 00:14:36.567 "data_size": 63488 00:14:36.567 } 00:14:36.567 ] 00:14:36.567 }' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:36.567 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.567 [2024-11-26 19:01:27.741411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.567 [2024-11-26 19:01:27.860611] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb 
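The `[: =: unary operator expected` message recorded above comes from the traced test `'[' = false ']'`: an unset or empty variable expanded unquoted inside a single-bracket test, leaving a malformed two-argument expression. The snippet below is an illustrative reproduction, not part of the captured test output; the variable name `flag` is hypothetical.

```shell
#!/usr/bin/env bash
# Reproduction sketch of the "[: =: unary operator expected" failure seen
# in the log. With an empty, unquoted variable the test collapses to
# `[ = false ]`, which the test builtin rejects with status 2.
flag=""

[ $flag = false ] 2>/dev/null    # expands to: [ = false ]  (malformed)
echo "unquoted status: $?"       # 2 = syntax error in the test expression

[ "$flag" = false ]              # quoting keeps the empty operand in place
echo "quoted status: $?"         # 1 = well-formed comparison, simply false

[[ $flag = false ]]              # [[ ]] never word-splits its operands
echo "dbracket status: $?"       # 1 = well-formed comparison, simply false
```

Either quoting the expansion or switching to `[[ ]]` keeps the comparison well-formed when the variable is empty, which is why the surrounding script continues past the error rather than aborting.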
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.567 "name": "raid_bdev1", 00:14:36.567 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:36.567 "strip_size_kb": 0, 00:14:36.567 "state": "online", 00:14:36.567 "raid_level": "raid1", 00:14:36.567 "superblock": true, 00:14:36.567 "num_base_bdevs": 4, 00:14:36.567 "num_base_bdevs_discovered": 3, 00:14:36.567 "num_base_bdevs_operational": 3, 00:14:36.567 "process": { 00:14:36.567 "type": "rebuild", 00:14:36.567 "target": "spare", 00:14:36.567 "progress": { 00:14:36.567 "blocks": 24576, 00:14:36.567 "percent": 38 00:14:36.567 } 00:14:36.567 }, 00:14:36.567 "base_bdevs_list": [ 00:14:36.567 { 00:14:36.567 "name": "spare", 00:14:36.567 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:36.567 "is_configured": true, 00:14:36.567 "data_offset": 2048, 00:14:36.567 "data_size": 63488 00:14:36.567 }, 00:14:36.567 { 00:14:36.567 "name": null, 00:14:36.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.567 "is_configured": false, 00:14:36.567 "data_offset": 0, 00:14:36.567 "data_size": 63488 00:14:36.567 }, 00:14:36.567 { 00:14:36.567 "name": "BaseBdev3", 
00:14:36.567 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:36.567 "is_configured": true, 00:14:36.567 "data_offset": 2048, 00:14:36.567 "data_size": 63488 00:14:36.567 }, 00:14:36.567 { 00:14:36.567 "name": "BaseBdev4", 00:14:36.567 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:36.567 "is_configured": true, 00:14:36.567 "data_offset": 2048, 00:14:36.567 "data_size": 63488 00:14:36.567 } 00:14:36.567 ] 00:14:36.567 }' 00:14:36.567 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.826 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.826 19:01:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.826 
19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.826 "name": "raid_bdev1", 00:14:36.826 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:36.826 "strip_size_kb": 0, 00:14:36.826 "state": "online", 00:14:36.826 "raid_level": "raid1", 00:14:36.826 "superblock": true, 00:14:36.826 "num_base_bdevs": 4, 00:14:36.826 "num_base_bdevs_discovered": 3, 00:14:36.826 "num_base_bdevs_operational": 3, 00:14:36.826 "process": { 00:14:36.826 "type": "rebuild", 00:14:36.826 "target": "spare", 00:14:36.826 "progress": { 00:14:36.826 "blocks": 26624, 00:14:36.826 "percent": 41 00:14:36.826 } 00:14:36.826 }, 00:14:36.826 "base_bdevs_list": [ 00:14:36.826 { 00:14:36.826 "name": "spare", 00:14:36.826 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:36.826 "is_configured": true, 00:14:36.826 "data_offset": 2048, 00:14:36.826 "data_size": 63488 00:14:36.826 }, 00:14:36.826 { 00:14:36.826 "name": null, 00:14:36.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.826 "is_configured": false, 00:14:36.826 "data_offset": 0, 00:14:36.826 "data_size": 63488 00:14:36.826 }, 00:14:36.826 { 00:14:36.826 "name": "BaseBdev3", 00:14:36.826 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:36.826 "is_configured": true, 00:14:36.826 "data_offset": 2048, 00:14:36.826 "data_size": 63488 00:14:36.826 }, 00:14:36.826 { 00:14:36.826 "name": "BaseBdev4", 00:14:36.826 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:36.826 "is_configured": true, 00:14:36.826 "data_offset": 2048, 00:14:36.826 "data_size": 63488 00:14:36.826 } 00:14:36.826 ] 00:14:36.826 }' 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.826 19:01:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.826 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.086 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.086 19:01:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.022 "name": "raid_bdev1", 00:14:38.022 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:38.022 "strip_size_kb": 0, 00:14:38.022 "state": "online", 00:14:38.022 "raid_level": "raid1", 00:14:38.022 "superblock": true, 00:14:38.022 "num_base_bdevs": 4, 
00:14:38.022 "num_base_bdevs_discovered": 3, 00:14:38.022 "num_base_bdevs_operational": 3, 00:14:38.022 "process": { 00:14:38.022 "type": "rebuild", 00:14:38.022 "target": "spare", 00:14:38.022 "progress": { 00:14:38.022 "blocks": 51200, 00:14:38.022 "percent": 80 00:14:38.022 } 00:14:38.022 }, 00:14:38.022 "base_bdevs_list": [ 00:14:38.022 { 00:14:38.022 "name": "spare", 00:14:38.022 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:38.022 "is_configured": true, 00:14:38.022 "data_offset": 2048, 00:14:38.022 "data_size": 63488 00:14:38.022 }, 00:14:38.022 { 00:14:38.022 "name": null, 00:14:38.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.022 "is_configured": false, 00:14:38.022 "data_offset": 0, 00:14:38.022 "data_size": 63488 00:14:38.022 }, 00:14:38.022 { 00:14:38.022 "name": "BaseBdev3", 00:14:38.022 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:38.022 "is_configured": true, 00:14:38.022 "data_offset": 2048, 00:14:38.022 "data_size": 63488 00:14:38.022 }, 00:14:38.022 { 00:14:38.022 "name": "BaseBdev4", 00:14:38.022 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:38.022 "is_configured": true, 00:14:38.022 "data_offset": 2048, 00:14:38.022 "data_size": 63488 00:14:38.022 } 00:14:38.022 ] 00:14:38.022 }' 00:14:38.022 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.023 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.023 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.023 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.023 19:01:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.591 [2024-11-26 19:01:29.774395] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:38.591 [2024-11-26 19:01:29.774496] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:38.591 [2024-11-26 19:01:29.774682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.159 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.159 "name": "raid_bdev1", 00:14:39.159 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:39.159 "strip_size_kb": 0, 00:14:39.159 "state": "online", 00:14:39.159 "raid_level": "raid1", 00:14:39.159 "superblock": true, 00:14:39.159 "num_base_bdevs": 4, 00:14:39.159 "num_base_bdevs_discovered": 3, 00:14:39.159 "num_base_bdevs_operational": 3, 00:14:39.159 "base_bdevs_list": [ 00:14:39.159 { 00:14:39.159 "name": "spare", 00:14:39.159 "uuid": 
"4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:39.159 "is_configured": true, 00:14:39.159 "data_offset": 2048, 00:14:39.159 "data_size": 63488 00:14:39.159 }, 00:14:39.159 { 00:14:39.159 "name": null, 00:14:39.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.159 "is_configured": false, 00:14:39.159 "data_offset": 0, 00:14:39.159 "data_size": 63488 00:14:39.159 }, 00:14:39.159 { 00:14:39.159 "name": "BaseBdev3", 00:14:39.159 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:39.159 "is_configured": true, 00:14:39.159 "data_offset": 2048, 00:14:39.160 "data_size": 63488 00:14:39.160 }, 00:14:39.160 { 00:14:39.160 "name": "BaseBdev4", 00:14:39.160 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:39.160 "is_configured": true, 00:14:39.160 "data_offset": 2048, 00:14:39.160 "data_size": 63488 00:14:39.160 } 00:14:39.160 ] 00:14:39.160 }' 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.160 19:01:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.160 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.419 "name": "raid_bdev1", 00:14:39.419 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:39.419 "strip_size_kb": 0, 00:14:39.419 "state": "online", 00:14:39.419 "raid_level": "raid1", 00:14:39.419 "superblock": true, 00:14:39.419 "num_base_bdevs": 4, 00:14:39.419 "num_base_bdevs_discovered": 3, 00:14:39.419 "num_base_bdevs_operational": 3, 00:14:39.419 "base_bdevs_list": [ 00:14:39.419 { 00:14:39.419 "name": "spare", 00:14:39.419 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:39.419 "is_configured": true, 00:14:39.419 "data_offset": 2048, 00:14:39.419 "data_size": 63488 00:14:39.419 }, 00:14:39.419 { 00:14:39.419 "name": null, 00:14:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.419 "is_configured": false, 00:14:39.419 "data_offset": 0, 00:14:39.419 "data_size": 63488 00:14:39.419 }, 00:14:39.419 { 00:14:39.419 "name": "BaseBdev3", 00:14:39.419 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:39.419 "is_configured": true, 00:14:39.419 "data_offset": 2048, 00:14:39.419 "data_size": 63488 00:14:39.419 }, 00:14:39.419 { 00:14:39.419 "name": "BaseBdev4", 00:14:39.419 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:39.419 "is_configured": true, 00:14:39.419 "data_offset": 2048, 00:14:39.419 "data_size": 63488 00:14:39.419 } 00:14:39.419 ] 00:14:39.419 }' 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.419 "name": "raid_bdev1", 00:14:39.419 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:39.419 "strip_size_kb": 0, 00:14:39.419 "state": "online", 00:14:39.419 "raid_level": "raid1", 00:14:39.419 "superblock": true, 00:14:39.419 "num_base_bdevs": 4, 00:14:39.419 "num_base_bdevs_discovered": 3, 00:14:39.419 "num_base_bdevs_operational": 3, 00:14:39.419 "base_bdevs_list": [ 00:14:39.419 { 00:14:39.419 "name": "spare", 00:14:39.419 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:39.419 "is_configured": true, 00:14:39.419 "data_offset": 2048, 00:14:39.419 "data_size": 63488 00:14:39.419 }, 00:14:39.419 { 00:14:39.419 "name": null, 00:14:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.419 "is_configured": false, 00:14:39.419 "data_offset": 0, 00:14:39.419 "data_size": 63488 00:14:39.419 }, 00:14:39.419 { 00:14:39.419 "name": "BaseBdev3", 00:14:39.419 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:39.419 "is_configured": true, 00:14:39.419 "data_offset": 2048, 00:14:39.419 "data_size": 63488 00:14:39.419 }, 00:14:39.419 { 00:14:39.419 "name": "BaseBdev4", 00:14:39.419 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:39.419 "is_configured": true, 00:14:39.419 "data_offset": 2048, 00:14:39.419 "data_size": 63488 00:14:39.419 } 00:14:39.419 ] 00:14:39.419 }' 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.419 19:01:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.013 
[2024-11-26 19:01:31.171028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.013 [2024-11-26 19:01:31.171068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.013 [2024-11-26 19:01:31.171175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.013 [2024-11-26 19:01:31.171352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.013 [2024-11-26 19:01:31.171369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:40.013 19:01:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:40.013 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:40.295 /dev/nbd0 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:14:40.295 1+0 records in 00:14:40.295 1+0 records out 00:14:40.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434466 s, 9.4 MB/s 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:40.295 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:40.553 /dev/nbd1 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.811 1+0 records in 00:14:40.811 1+0 records out 00:14:40.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382514 s, 10.7 MB/s 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:40.811 19:01:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.811 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.070 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:41.328 
19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.328 [2024-11-26 19:01:32.621600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:41.328 [2024-11-26 19:01:32.621718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.328 [2024-11-26 19:01:32.621752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:41.328 [2024-11-26 19:01:32.621767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.328 [2024-11-26 19:01:32.624867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.328 [2024-11-26 19:01:32.625054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.328 [2024-11-26 19:01:32.625196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:41.328 [2024-11-26 19:01:32.625267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:14:41.328 [2024-11-26 19:01:32.625455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.328 [2024-11-26 19:01:32.625596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:41.328 spare 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.328 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.586 [2024-11-26 19:01:32.725771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:41.586 [2024-11-26 19:01:32.725833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:41.586 [2024-11-26 19:01:32.726427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:41.586 [2024-11-26 19:01:32.726736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:41.586 [2024-11-26 19:01:32.726755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:41.586 [2024-11-26 19:01:32.726991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.586 "name": "raid_bdev1", 00:14:41.586 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:41.586 "strip_size_kb": 0, 00:14:41.586 "state": "online", 00:14:41.586 "raid_level": "raid1", 00:14:41.586 "superblock": true, 00:14:41.586 "num_base_bdevs": 4, 00:14:41.586 "num_base_bdevs_discovered": 3, 00:14:41.586 "num_base_bdevs_operational": 3, 00:14:41.586 "base_bdevs_list": [ 00:14:41.586 { 00:14:41.586 "name": "spare", 00:14:41.586 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:41.586 "is_configured": true, 00:14:41.586 "data_offset": 2048, 00:14:41.586 "data_size": 63488 00:14:41.586 }, 00:14:41.586 { 00:14:41.586 "name": null, 
00:14:41.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.586 "is_configured": false, 00:14:41.586 "data_offset": 2048, 00:14:41.586 "data_size": 63488 00:14:41.586 }, 00:14:41.586 { 00:14:41.586 "name": "BaseBdev3", 00:14:41.586 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:41.586 "is_configured": true, 00:14:41.586 "data_offset": 2048, 00:14:41.586 "data_size": 63488 00:14:41.586 }, 00:14:41.586 { 00:14:41.586 "name": "BaseBdev4", 00:14:41.586 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:41.586 "is_configured": true, 00:14:41.586 "data_offset": 2048, 00:14:41.586 "data_size": 63488 00:14:41.586 } 00:14:41.586 ] 00:14:41.586 }' 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.586 19:01:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.153 19:01:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.153 "name": "raid_bdev1", 00:14:42.153 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:42.153 "strip_size_kb": 0, 00:14:42.153 "state": "online", 00:14:42.153 "raid_level": "raid1", 00:14:42.153 "superblock": true, 00:14:42.153 "num_base_bdevs": 4, 00:14:42.153 "num_base_bdevs_discovered": 3, 00:14:42.153 "num_base_bdevs_operational": 3, 00:14:42.153 "base_bdevs_list": [ 00:14:42.153 { 00:14:42.153 "name": "spare", 00:14:42.153 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:42.153 "is_configured": true, 00:14:42.153 "data_offset": 2048, 00:14:42.153 "data_size": 63488 00:14:42.153 }, 00:14:42.153 { 00:14:42.153 "name": null, 00:14:42.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.153 "is_configured": false, 00:14:42.153 "data_offset": 2048, 00:14:42.153 "data_size": 63488 00:14:42.153 }, 00:14:42.153 { 00:14:42.153 "name": "BaseBdev3", 00:14:42.153 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:42.153 "is_configured": true, 00:14:42.153 "data_offset": 2048, 00:14:42.153 "data_size": 63488 00:14:42.153 }, 00:14:42.153 { 00:14:42.153 "name": "BaseBdev4", 00:14:42.153 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:42.153 "is_configured": true, 00:14:42.153 "data_offset": 2048, 00:14:42.153 "data_size": 63488 00:14:42.153 } 00:14:42.153 ] 00:14:42.153 }' 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.153 19:01:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 [2024-11-26 19:01:33.450043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.153 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.153 "name": "raid_bdev1", 00:14:42.153 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:42.153 "strip_size_kb": 0, 00:14:42.153 "state": "online", 00:14:42.153 "raid_level": "raid1", 00:14:42.153 "superblock": true, 00:14:42.153 "num_base_bdevs": 4, 00:14:42.153 "num_base_bdevs_discovered": 2, 00:14:42.153 "num_base_bdevs_operational": 2, 00:14:42.153 "base_bdevs_list": [ 00:14:42.153 { 00:14:42.153 "name": null, 00:14:42.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.153 "is_configured": false, 00:14:42.153 "data_offset": 0, 00:14:42.153 "data_size": 63488 00:14:42.153 }, 00:14:42.153 { 00:14:42.153 "name": null, 00:14:42.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.154 "is_configured": false, 00:14:42.154 "data_offset": 2048, 00:14:42.154 "data_size": 63488 00:14:42.154 }, 00:14:42.154 { 00:14:42.154 "name": "BaseBdev3", 00:14:42.154 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:42.154 "is_configured": true, 00:14:42.154 "data_offset": 2048, 00:14:42.154 "data_size": 63488 00:14:42.154 }, 00:14:42.154 { 00:14:42.154 "name": "BaseBdev4", 00:14:42.154 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:42.154 "is_configured": 
true, 00:14:42.154 "data_offset": 2048, 00:14:42.154 "data_size": 63488 00:14:42.154 } 00:14:42.154 ] 00:14:42.154 }' 00:14:42.154 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.154 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.720 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:42.720 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.720 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.720 [2024-11-26 19:01:33.966263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.720 [2024-11-26 19:01:33.966596] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:42.720 [2024-11-26 19:01:33.966618] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:42.720 [2024-11-26 19:01:33.966670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.720 [2024-11-26 19:01:33.980622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:42.720 19:01:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.720 19:01:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:42.720 [2024-11-26 19:01:33.983573] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.657 19:01:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.657 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.916 "name": "raid_bdev1", 00:14:43.916 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:43.916 "strip_size_kb": 0, 00:14:43.916 "state": "online", 00:14:43.916 "raid_level": "raid1", 
00:14:43.916 "superblock": true, 00:14:43.916 "num_base_bdevs": 4, 00:14:43.916 "num_base_bdevs_discovered": 3, 00:14:43.916 "num_base_bdevs_operational": 3, 00:14:43.916 "process": { 00:14:43.916 "type": "rebuild", 00:14:43.916 "target": "spare", 00:14:43.916 "progress": { 00:14:43.916 "blocks": 20480, 00:14:43.916 "percent": 32 00:14:43.916 } 00:14:43.916 }, 00:14:43.916 "base_bdevs_list": [ 00:14:43.916 { 00:14:43.916 "name": "spare", 00:14:43.916 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:43.916 "is_configured": true, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 }, 00:14:43.916 { 00:14:43.916 "name": null, 00:14:43.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.916 "is_configured": false, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 }, 00:14:43.916 { 00:14:43.916 "name": "BaseBdev3", 00:14:43.916 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:43.916 "is_configured": true, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 }, 00:14:43.916 { 00:14:43.916 "name": "BaseBdev4", 00:14:43.916 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:43.916 "is_configured": true, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 } 00:14:43.916 ] 00:14:43.916 }' 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.916 [2024-11-26 19:01:35.149481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.916 [2024-11-26 19:01:35.192661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.916 [2024-11-26 19:01:35.192978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.916 [2024-11-26 19:01:35.193016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.916 [2024-11-26 19:01:35.193030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.916 "name": "raid_bdev1", 00:14:43.916 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:43.916 "strip_size_kb": 0, 00:14:43.916 "state": "online", 00:14:43.916 "raid_level": "raid1", 00:14:43.916 "superblock": true, 00:14:43.916 "num_base_bdevs": 4, 00:14:43.916 "num_base_bdevs_discovered": 2, 00:14:43.916 "num_base_bdevs_operational": 2, 00:14:43.916 "base_bdevs_list": [ 00:14:43.916 { 00:14:43.916 "name": null, 00:14:43.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.916 "is_configured": false, 00:14:43.916 "data_offset": 0, 00:14:43.916 "data_size": 63488 00:14:43.916 }, 00:14:43.916 { 00:14:43.916 "name": null, 00:14:43.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.916 "is_configured": false, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 }, 00:14:43.916 { 00:14:43.916 "name": "BaseBdev3", 00:14:43.916 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:43.916 "is_configured": true, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 }, 00:14:43.916 { 00:14:43.916 "name": "BaseBdev4", 00:14:43.916 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:43.916 "is_configured": true, 00:14:43.916 "data_offset": 2048, 00:14:43.916 "data_size": 63488 00:14:43.916 } 00:14:43.916 ] 00:14:43.916 }' 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:43.916 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.484 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.484 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.484 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.484 [2024-11-26 19:01:35.773054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.484 [2024-11-26 19:01:35.773258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.484 [2024-11-26 19:01:35.773417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:44.484 [2024-11-26 19:01:35.773580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.484 [2024-11-26 19:01:35.774363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.484 [2024-11-26 19:01:35.774514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.484 [2024-11-26 19:01:35.774782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:44.484 [2024-11-26 19:01:35.774943] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:44.484 [2024-11-26 19:01:35.774978] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:44.484 [2024-11-26 19:01:35.775032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.484 [2024-11-26 19:01:35.789170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:44.484 spare 00:14:44.484 19:01:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.484 19:01:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:44.484 [2024-11-26 19:01:35.791761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.862 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.863 "name": "raid_bdev1", 00:14:45.863 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:45.863 "strip_size_kb": 0, 00:14:45.863 "state": "online", 00:14:45.863 
"raid_level": "raid1", 00:14:45.863 "superblock": true, 00:14:45.863 "num_base_bdevs": 4, 00:14:45.863 "num_base_bdevs_discovered": 3, 00:14:45.863 "num_base_bdevs_operational": 3, 00:14:45.863 "process": { 00:14:45.863 "type": "rebuild", 00:14:45.863 "target": "spare", 00:14:45.863 "progress": { 00:14:45.863 "blocks": 20480, 00:14:45.863 "percent": 32 00:14:45.863 } 00:14:45.863 }, 00:14:45.863 "base_bdevs_list": [ 00:14:45.863 { 00:14:45.863 "name": "spare", 00:14:45.863 "uuid": "4d9dcb3c-5343-5ad8-9cfb-30933c591105", 00:14:45.863 "is_configured": true, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 }, 00:14:45.863 { 00:14:45.863 "name": null, 00:14:45.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.863 "is_configured": false, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 }, 00:14:45.863 { 00:14:45.863 "name": "BaseBdev3", 00:14:45.863 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:45.863 "is_configured": true, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 }, 00:14:45.863 { 00:14:45.863 "name": "BaseBdev4", 00:14:45.863 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:45.863 "is_configured": true, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 } 00:14:45.863 ] 00:14:45.863 }' 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.863 19:01:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.863 [2024-11-26 19:01:36.968998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.863 [2024-11-26 19:01:37.000953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:45.863 [2024-11-26 19:01:37.001046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.863 [2024-11-26 19:01:37.001071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.863 [2024-11-26 19:01:37.001085] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.863 
19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.863 "name": "raid_bdev1", 00:14:45.863 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:45.863 "strip_size_kb": 0, 00:14:45.863 "state": "online", 00:14:45.863 "raid_level": "raid1", 00:14:45.863 "superblock": true, 00:14:45.863 "num_base_bdevs": 4, 00:14:45.863 "num_base_bdevs_discovered": 2, 00:14:45.863 "num_base_bdevs_operational": 2, 00:14:45.863 "base_bdevs_list": [ 00:14:45.863 { 00:14:45.863 "name": null, 00:14:45.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.863 "is_configured": false, 00:14:45.863 "data_offset": 0, 00:14:45.863 "data_size": 63488 00:14:45.863 }, 00:14:45.863 { 00:14:45.863 "name": null, 00:14:45.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.863 "is_configured": false, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 }, 00:14:45.863 { 00:14:45.863 "name": "BaseBdev3", 00:14:45.863 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:45.863 "is_configured": true, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 }, 00:14:45.863 { 00:14:45.863 "name": "BaseBdev4", 00:14:45.863 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:45.863 "is_configured": true, 00:14:45.863 "data_offset": 2048, 00:14:45.863 "data_size": 63488 00:14:45.863 } 00:14:45.863 ] 00:14:45.863 }' 00:14:45.863 19:01:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.863 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.432 "name": "raid_bdev1", 00:14:46.432 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:46.432 "strip_size_kb": 0, 00:14:46.432 "state": "online", 00:14:46.432 "raid_level": "raid1", 00:14:46.432 "superblock": true, 00:14:46.432 "num_base_bdevs": 4, 00:14:46.432 "num_base_bdevs_discovered": 2, 00:14:46.432 "num_base_bdevs_operational": 2, 00:14:46.432 "base_bdevs_list": [ 00:14:46.432 { 00:14:46.432 "name": null, 00:14:46.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.432 "is_configured": false, 00:14:46.432 "data_offset": 0, 00:14:46.432 "data_size": 63488 00:14:46.432 }, 00:14:46.432 
{ 00:14:46.432 "name": null, 00:14:46.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.432 "is_configured": false, 00:14:46.432 "data_offset": 2048, 00:14:46.432 "data_size": 63488 00:14:46.432 }, 00:14:46.432 { 00:14:46.432 "name": "BaseBdev3", 00:14:46.432 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:46.432 "is_configured": true, 00:14:46.432 "data_offset": 2048, 00:14:46.432 "data_size": 63488 00:14:46.432 }, 00:14:46.432 { 00:14:46.432 "name": "BaseBdev4", 00:14:46.432 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:46.432 "is_configured": true, 00:14:46.432 "data_offset": 2048, 00:14:46.432 "data_size": 63488 00:14:46.432 } 00:14:46.432 ] 00:14:46.432 }' 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.432 [2024-11-26 19:01:37.698439] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:46.432 [2024-11-26 19:01:37.698523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.432 [2024-11-26 19:01:37.698568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:46.432 [2024-11-26 19:01:37.698587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.432 [2024-11-26 19:01:37.699211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.432 [2024-11-26 19:01:37.699249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:46.432 [2024-11-26 19:01:37.699349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:46.432 [2024-11-26 19:01:37.699384] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:46.432 [2024-11-26 19:01:37.699397] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:46.432 [2024-11-26 19:01:37.699428] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:46.432 BaseBdev1 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.432 19:01:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.373 19:01:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.373 19:01:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.633 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.633 "name": "raid_bdev1", 00:14:47.633 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:47.633 "strip_size_kb": 0, 00:14:47.633 "state": "online", 00:14:47.633 "raid_level": "raid1", 00:14:47.633 "superblock": true, 00:14:47.633 "num_base_bdevs": 4, 00:14:47.633 "num_base_bdevs_discovered": 2, 00:14:47.633 "num_base_bdevs_operational": 2, 00:14:47.633 "base_bdevs_list": [ 00:14:47.633 { 00:14:47.633 "name": null, 00:14:47.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.633 "is_configured": false, 00:14:47.633 "data_offset": 0, 00:14:47.633 "data_size": 63488 00:14:47.633 }, 00:14:47.633 { 00:14:47.633 "name": null, 00:14:47.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.633 
"is_configured": false, 00:14:47.633 "data_offset": 2048, 00:14:47.633 "data_size": 63488 00:14:47.633 }, 00:14:47.633 { 00:14:47.633 "name": "BaseBdev3", 00:14:47.633 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:47.633 "is_configured": true, 00:14:47.633 "data_offset": 2048, 00:14:47.633 "data_size": 63488 00:14:47.633 }, 00:14:47.633 { 00:14:47.633 "name": "BaseBdev4", 00:14:47.633 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:47.633 "is_configured": true, 00:14:47.633 "data_offset": 2048, 00:14:47.633 "data_size": 63488 00:14:47.633 } 00:14:47.633 ] 00:14:47.633 }' 00:14:47.633 19:01:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.633 19:01:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.892 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:48.151 "name": "raid_bdev1", 00:14:48.151 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:48.151 "strip_size_kb": 0, 00:14:48.151 "state": "online", 00:14:48.151 "raid_level": "raid1", 00:14:48.151 "superblock": true, 00:14:48.151 "num_base_bdevs": 4, 00:14:48.151 "num_base_bdevs_discovered": 2, 00:14:48.151 "num_base_bdevs_operational": 2, 00:14:48.151 "base_bdevs_list": [ 00:14:48.151 { 00:14:48.151 "name": null, 00:14:48.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.151 "is_configured": false, 00:14:48.151 "data_offset": 0, 00:14:48.151 "data_size": 63488 00:14:48.151 }, 00:14:48.151 { 00:14:48.151 "name": null, 00:14:48.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.151 "is_configured": false, 00:14:48.151 "data_offset": 2048, 00:14:48.151 "data_size": 63488 00:14:48.151 }, 00:14:48.151 { 00:14:48.151 "name": "BaseBdev3", 00:14:48.151 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:48.151 "is_configured": true, 00:14:48.151 "data_offset": 2048, 00:14:48.151 "data_size": 63488 00:14:48.151 }, 00:14:48.151 { 00:14:48.151 "name": "BaseBdev4", 00:14:48.151 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:48.151 "is_configured": true, 00:14:48.151 "data_offset": 2048, 00:14:48.151 "data_size": 63488 00:14:48.151 } 00:14:48.151 ] 00:14:48.151 }' 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.151 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.152 [2024-11-26 19:01:39.411005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.152 [2024-11-26 19:01:39.411264] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:48.152 [2024-11-26 19:01:39.411285] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:48.152 request: 00:14:48.152 { 00:14:48.152 "base_bdev": "BaseBdev1", 00:14:48.152 "raid_bdev": "raid_bdev1", 00:14:48.152 "method": "bdev_raid_add_base_bdev", 00:14:48.152 "req_id": 1 00:14:48.152 } 00:14:48.152 Got JSON-RPC error response 00:14:48.152 response: 00:14:48.152 { 00:14:48.152 "code": -22, 00:14:48.152 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:48.152 } 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.152 19:01:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:49.088 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.347 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.347 "name": "raid_bdev1", 00:14:49.347 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:49.347 "strip_size_kb": 0, 00:14:49.347 "state": "online", 00:14:49.347 "raid_level": "raid1", 00:14:49.347 "superblock": true, 00:14:49.347 "num_base_bdevs": 4, 00:14:49.347 "num_base_bdevs_discovered": 2, 00:14:49.347 "num_base_bdevs_operational": 2, 00:14:49.347 "base_bdevs_list": [ 00:14:49.347 { 00:14:49.347 "name": null, 00:14:49.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.347 "is_configured": false, 00:14:49.347 "data_offset": 0, 00:14:49.347 "data_size": 63488 00:14:49.347 }, 00:14:49.347 { 00:14:49.347 "name": null, 00:14:49.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.348 "is_configured": false, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 }, 00:14:49.348 { 00:14:49.348 "name": "BaseBdev3", 00:14:49.348 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:49.348 "is_configured": true, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 }, 00:14:49.348 { 00:14:49.348 "name": "BaseBdev4", 00:14:49.348 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:49.348 "is_configured": true, 00:14:49.348 "data_offset": 2048, 00:14:49.348 "data_size": 63488 00:14:49.348 } 00:14:49.348 ] 00:14:49.348 }' 00:14:49.348 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.348 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.607 19:01:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.607 19:01:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.866 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.866 "name": "raid_bdev1", 00:14:49.866 "uuid": "1adc83f2-1398-4b14-a624-1083cfd09a25", 00:14:49.866 "strip_size_kb": 0, 00:14:49.866 "state": "online", 00:14:49.866 "raid_level": "raid1", 00:14:49.866 "superblock": true, 00:14:49.866 "num_base_bdevs": 4, 00:14:49.866 "num_base_bdevs_discovered": 2, 00:14:49.866 "num_base_bdevs_operational": 2, 00:14:49.866 "base_bdevs_list": [ 00:14:49.866 { 00:14:49.866 "name": null, 00:14:49.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.866 "is_configured": false, 00:14:49.866 "data_offset": 0, 00:14:49.866 "data_size": 63488 00:14:49.866 }, 00:14:49.866 { 00:14:49.866 "name": null, 00:14:49.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.866 "is_configured": false, 00:14:49.866 "data_offset": 2048, 00:14:49.866 "data_size": 63488 00:14:49.866 }, 00:14:49.866 { 00:14:49.866 "name": "BaseBdev3", 00:14:49.866 "uuid": "b083d7ab-8fac-51ba-9667-64898eabf273", 00:14:49.866 "is_configured": true, 00:14:49.866 "data_offset": 2048, 00:14:49.866 "data_size": 63488 00:14:49.866 }, 
00:14:49.866 { 00:14:49.866 "name": "BaseBdev4", 00:14:49.866 "uuid": "fc0c7a60-4950-590d-9c5c-956e06aff5bb", 00:14:49.866 "is_configured": true, 00:14:49.866 "data_offset": 2048, 00:14:49.866 "data_size": 63488 00:14:49.866 } 00:14:49.866 ] 00:14:49.866 }' 00:14:49.866 19:01:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78306 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78306 ']' 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78306 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78306 00:14:49.866 killing process with pid 78306 00:14:49.866 Received shutdown signal, test time was about 60.000000 seconds 00:14:49.866 00:14:49.866 Latency(us) 00:14:49.866 [2024-11-26T19:01:41.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.866 [2024-11-26T19:01:41.233Z] =================================================================================================================== 00:14:49.866 [2024-11-26T19:01:41.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78306' 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78306 00:14:49.866 19:01:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78306 00:14:49.866 [2024-11-26 19:01:41.106508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.866 [2024-11-26 19:01:41.106673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.866 [2024-11-26 19:01:41.106767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.866 [2024-11-26 19:01:41.106790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:50.434 [2024-11-26 19:01:41.541032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:51.371 00:14:51.371 real 0m28.945s 00:14:51.371 user 0m35.278s 00:14:51.371 sys 0m4.087s 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.371 ************************************ 00:14:51.371 END TEST raid_rebuild_test_sb 00:14:51.371 ************************************ 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.371 19:01:42 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:51.371 19:01:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:51.371 19:01:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.371 19:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
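The `verify_raid_bdev_process` checks exercised throughout the test above reduce to a pair of `jq` filters over the `bdev_raid_get_bdevs` RPC output. The following is a minimal standalone sketch of that extraction (it requires `jq`; the inline JSON is a trimmed sample shaped like the RPC responses in this log, not live RPC output):

```shell
# Trimmed sample of `rpc_cmd bdev_raid_get_bdevs all` output, as seen above.
raid_bdevs='[{"name": "raid_bdev1", "state": "online",
  "process": {"type": "rebuild", "target": "spare",
              "progress": {"blocks": 20480, "percent": 32}}}]'

# Step 1: select the bdev under test by name, as bdev_raid.sh@113 does.
raid_bdev_info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# Step 2: read the background-process type and target; the `//` alternative
# operator yields "none" when no rebuild is in flight (bdev_raid.sh@176/@177).
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')

echo "$process_type $process_target"   # rebuild spare
```

Once the rebuild completes and the `process` object disappears from the RPC output, both filters fall through to `"none"`, which is exactly the post-rebuild state the `[[ none == \n\o\n\e ]]` comparisons above assert.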
00:14:51.371 ************************************ 00:14:51.371 START TEST raid_rebuild_test_io 00:14:51.371 ************************************ 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79100 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79100 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79100 ']' 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.371 19:01:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.630 [2024-11-26 19:01:42.780739] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:14:51.630 [2024-11-26 19:01:42.781203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79100 ] 00:14:51.630 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:51.630 Zero copy mechanism will not be used. 
00:14:51.630 [2024-11-26 19:01:42.975123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.888 [2024-11-26 19:01:43.161550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.147 [2024-11-26 19:01:43.380740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.147 [2024-11-26 19:01:43.380783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 BaseBdev1_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 [2024-11-26 19:01:43.850660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.713 [2024-11-26 19:01:43.850764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.713 [2024-11-26 19:01:43.850795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:52.713 [2024-11-26 
19:01:43.850813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.713 [2024-11-26 19:01:43.854015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.713 [2024-11-26 19:01:43.854064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.713 BaseBdev1 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 BaseBdev2_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 [2024-11-26 19:01:43.902532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:52.713 [2024-11-26 19:01:43.902619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.713 [2024-11-26 19:01:43.902666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:52.713 [2024-11-26 19:01:43.902682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.713 [2024-11-26 19:01:43.905627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:52.713 [2024-11-26 19:01:43.905688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:52.713 BaseBdev2 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 BaseBdev3_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 [2024-11-26 19:01:43.972992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:52.713 [2024-11-26 19:01:43.973214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.713 [2024-11-26 19:01:43.973258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:52.713 [2024-11-26 19:01:43.973279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.713 [2024-11-26 19:01:43.976105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.713 [2024-11-26 19:01:43.976156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:52.713 BaseBdev3 00:14:52.713 19:01:43 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 BaseBdev4_malloc 00:14:52.713 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.713 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:52.713 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.713 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.713 [2024-11-26 19:01:44.027171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:52.713 [2024-11-26 19:01:44.027263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.713 [2024-11-26 19:01:44.027295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:52.713 [2024-11-26 19:01:44.027313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.713 [2024-11-26 19:01:44.030264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.713 [2024-11-26 19:01:44.030316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:52.713 BaseBdev4 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.714 spare_malloc 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.714 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.973 spare_delay 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.973 [2024-11-26 19:01:44.089491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.973 [2024-11-26 19:01:44.089562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.973 [2024-11-26 19:01:44.089591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:52.973 [2024-11-26 19:01:44.089609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.973 [2024-11-26 19:01:44.092537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.973 [2024-11-26 19:01:44.092759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.973 spare 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.973 [2024-11-26 19:01:44.101652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.973 [2024-11-26 19:01:44.104261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.973 [2024-11-26 19:01:44.104358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.973 [2024-11-26 19:01:44.104431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.973 [2024-11-26 19:01:44.104532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:52.973 [2024-11-26 19:01:44.104554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:52.973 [2024-11-26 19:01:44.104848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:52.973 [2024-11-26 19:01:44.105100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:52.973 [2024-11-26 19:01:44.105120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:52.973 [2024-11-26 19:01:44.105342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:52.973 19:01:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.973 "name": "raid_bdev1", 00:14:52.973 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:52.973 "strip_size_kb": 0, 00:14:52.973 "state": "online", 00:14:52.973 "raid_level": "raid1", 00:14:52.973 "superblock": false, 00:14:52.973 "num_base_bdevs": 4, 00:14:52.973 "num_base_bdevs_discovered": 4, 00:14:52.973 "num_base_bdevs_operational": 4, 00:14:52.973 "base_bdevs_list": [ 00:14:52.973 
{ 00:14:52.973 "name": "BaseBdev1", 00:14:52.973 "uuid": "36329d9a-91f0-5735-89c4-34ad55ca6032", 00:14:52.973 "is_configured": true, 00:14:52.973 "data_offset": 0, 00:14:52.973 "data_size": 65536 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "name": "BaseBdev2", 00:14:52.973 "uuid": "557dda96-12b0-5cf6-b84a-5009073e9e95", 00:14:52.973 "is_configured": true, 00:14:52.973 "data_offset": 0, 00:14:52.973 "data_size": 65536 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "name": "BaseBdev3", 00:14:52.973 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:52.973 "is_configured": true, 00:14:52.973 "data_offset": 0, 00:14:52.973 "data_size": 65536 00:14:52.973 }, 00:14:52.973 { 00:14:52.973 "name": "BaseBdev4", 00:14:52.973 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:52.973 "is_configured": true, 00:14:52.973 "data_offset": 0, 00:14:52.973 "data_size": 65536 00:14:52.973 } 00:14:52.973 ] 00:14:52.973 }' 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.973 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.299 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:53.299 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.299 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.299 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.299 [2024-11-26 19:01:44.618356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.299 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.562 
19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 [2024-11-26 19:01:44.725856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.562 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.562 "name": "raid_bdev1", 00:14:53.563 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:53.563 "strip_size_kb": 0, 00:14:53.563 "state": "online", 00:14:53.563 "raid_level": "raid1", 00:14:53.563 "superblock": false, 00:14:53.563 "num_base_bdevs": 4, 00:14:53.563 "num_base_bdevs_discovered": 3, 00:14:53.563 "num_base_bdevs_operational": 3, 00:14:53.563 "base_bdevs_list": [ 00:14:53.563 { 00:14:53.563 "name": null, 00:14:53.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.563 "is_configured": false, 00:14:53.563 "data_offset": 0, 00:14:53.563 "data_size": 65536 00:14:53.563 }, 00:14:53.563 { 00:14:53.563 "name": "BaseBdev2", 00:14:53.563 "uuid": "557dda96-12b0-5cf6-b84a-5009073e9e95", 00:14:53.563 "is_configured": true, 00:14:53.563 "data_offset": 0, 00:14:53.563 "data_size": 65536 00:14:53.563 }, 00:14:53.563 { 00:14:53.563 "name": "BaseBdev3", 00:14:53.563 "uuid": 
"96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:53.563 "is_configured": true, 00:14:53.563 "data_offset": 0, 00:14:53.563 "data_size": 65536 00:14:53.563 }, 00:14:53.563 { 00:14:53.563 "name": "BaseBdev4", 00:14:53.563 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:53.563 "is_configured": true, 00:14:53.563 "data_offset": 0, 00:14:53.563 "data_size": 65536 00:14:53.563 } 00:14:53.563 ] 00:14:53.563 }' 00:14:53.563 19:01:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.563 19:01:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 [2024-11-26 19:01:44.854309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:53.563 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:53.563 Zero copy mechanism will not be used. 00:14:53.563 Running I/O for 60 seconds... 00:14:54.130 19:01:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.130 19:01:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.130 19:01:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.130 [2024-11-26 19:01:45.260425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.130 19:01:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.130 19:01:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:54.130 [2024-11-26 19:01:45.347134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:54.130 [2024-11-26 19:01:45.350084] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.130 [2024-11-26 19:01:45.482373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:54.130 
[2024-11-26 19:01:45.484224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:54.390 [2024-11-26 19:01:45.716202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:54.390 [2024-11-26 19:01:45.717401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:54.908 130.00 IOPS, 390.00 MiB/s [2024-11-26T19:01:46.275Z] [2024-11-26 19:01:46.237442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.167 "name": "raid_bdev1", 00:14:55.167 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:55.167 
"strip_size_kb": 0, 00:14:55.167 "state": "online", 00:14:55.167 "raid_level": "raid1", 00:14:55.167 "superblock": false, 00:14:55.167 "num_base_bdevs": 4, 00:14:55.167 "num_base_bdevs_discovered": 4, 00:14:55.167 "num_base_bdevs_operational": 4, 00:14:55.167 "process": { 00:14:55.167 "type": "rebuild", 00:14:55.167 "target": "spare", 00:14:55.167 "progress": { 00:14:55.167 "blocks": 10240, 00:14:55.167 "percent": 15 00:14:55.167 } 00:14:55.167 }, 00:14:55.167 "base_bdevs_list": [ 00:14:55.167 { 00:14:55.167 "name": "spare", 00:14:55.167 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:14:55.167 "is_configured": true, 00:14:55.167 "data_offset": 0, 00:14:55.167 "data_size": 65536 00:14:55.167 }, 00:14:55.167 { 00:14:55.167 "name": "BaseBdev2", 00:14:55.167 "uuid": "557dda96-12b0-5cf6-b84a-5009073e9e95", 00:14:55.167 "is_configured": true, 00:14:55.167 "data_offset": 0, 00:14:55.167 "data_size": 65536 00:14:55.167 }, 00:14:55.167 { 00:14:55.167 "name": "BaseBdev3", 00:14:55.167 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:55.167 "is_configured": true, 00:14:55.167 "data_offset": 0, 00:14:55.167 "data_size": 65536 00:14:55.167 }, 00:14:55.167 { 00:14:55.167 "name": "BaseBdev4", 00:14:55.167 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:55.167 "is_configured": true, 00:14:55.167 "data_offset": 0, 00:14:55.167 "data_size": 65536 00:14:55.167 } 00:14:55.167 ] 00:14:55.167 }' 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.167 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.168 [2024-11-26 19:01:46.479611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:55.168 19:01:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.168 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:55.168 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.168 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.168 [2024-11-26 19:01:46.513869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.427 [2024-11-26 19:01:46.758312] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:55.427 [2024-11-26 19:01:46.772003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.427 [2024-11-26 19:01:46.772239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.427 [2024-11-26 19:01:46.772286] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:55.686 [2024-11-26 19:01:46.805045] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.686 108.50 IOPS, 325.50 MiB/s [2024-11-26T19:01:47.053Z] 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.686 "name": "raid_bdev1", 00:14:55.686 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:55.686 "strip_size_kb": 0, 00:14:55.686 "state": "online", 00:14:55.686 "raid_level": "raid1", 00:14:55.686 "superblock": false, 00:14:55.686 "num_base_bdevs": 4, 00:14:55.686 "num_base_bdevs_discovered": 3, 00:14:55.686 "num_base_bdevs_operational": 3, 00:14:55.686 "base_bdevs_list": [ 00:14:55.686 { 00:14:55.686 "name": null, 00:14:55.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.686 "is_configured": false, 00:14:55.686 "data_offset": 0, 00:14:55.686 "data_size": 65536 00:14:55.686 }, 00:14:55.686 { 00:14:55.686 "name": "BaseBdev2", 00:14:55.686 "uuid": "557dda96-12b0-5cf6-b84a-5009073e9e95", 00:14:55.686 "is_configured": true, 00:14:55.686 "data_offset": 0, 00:14:55.686 "data_size": 65536 00:14:55.686 }, 00:14:55.686 { 00:14:55.686 "name": "BaseBdev3", 00:14:55.686 "uuid": 
"96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:55.686 "is_configured": true, 00:14:55.686 "data_offset": 0, 00:14:55.686 "data_size": 65536 00:14:55.686 }, 00:14:55.686 { 00:14:55.686 "name": "BaseBdev4", 00:14:55.686 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:55.686 "is_configured": true, 00:14:55.686 "data_offset": 0, 00:14:55.686 "data_size": 65536 00:14:55.686 } 00:14:55.686 ] 00:14:55.686 }' 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.686 19:01:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.255 "name": "raid_bdev1", 00:14:56.255 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:56.255 "strip_size_kb": 0, 00:14:56.255 "state": "online", 00:14:56.255 "raid_level": 
"raid1", 00:14:56.255 "superblock": false, 00:14:56.255 "num_base_bdevs": 4, 00:14:56.255 "num_base_bdevs_discovered": 3, 00:14:56.255 "num_base_bdevs_operational": 3, 00:14:56.255 "base_bdevs_list": [ 00:14:56.255 { 00:14:56.255 "name": null, 00:14:56.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.255 "is_configured": false, 00:14:56.255 "data_offset": 0, 00:14:56.255 "data_size": 65536 00:14:56.255 }, 00:14:56.255 { 00:14:56.255 "name": "BaseBdev2", 00:14:56.255 "uuid": "557dda96-12b0-5cf6-b84a-5009073e9e95", 00:14:56.255 "is_configured": true, 00:14:56.255 "data_offset": 0, 00:14:56.255 "data_size": 65536 00:14:56.255 }, 00:14:56.255 { 00:14:56.255 "name": "BaseBdev3", 00:14:56.255 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:56.255 "is_configured": true, 00:14:56.255 "data_offset": 0, 00:14:56.255 "data_size": 65536 00:14:56.255 }, 00:14:56.255 { 00:14:56.255 "name": "BaseBdev4", 00:14:56.255 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:56.255 "is_configured": true, 00:14:56.255 "data_offset": 0, 00:14:56.255 "data_size": 65536 00:14:56.255 } 00:14:56.255 ] 00:14:56.255 }' 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.255 [2024-11-26 19:01:47.522431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.255 19:01:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:56.255 [2024-11-26 19:01:47.616151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:56.255 [2024-11-26 19:01:47.619308] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.514 [2024-11-26 19:01:47.730134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:56.514 [2024-11-26 19:01:47.731091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:56.773 130.00 IOPS, 390.00 MiB/s [2024-11-26T19:01:48.140Z] [2024-11-26 19:01:47.966823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:56.773 [2024-11-26 19:01:47.967824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.032 [2024-11-26 19:01:48.352752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:57.292 [2024-11-26 19:01:48.574647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.292 19:01:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.292 "name": "raid_bdev1", 00:14:57.292 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:57.292 "strip_size_kb": 0, 00:14:57.292 "state": "online", 00:14:57.292 "raid_level": "raid1", 00:14:57.292 "superblock": false, 00:14:57.292 "num_base_bdevs": 4, 00:14:57.292 "num_base_bdevs_discovered": 4, 00:14:57.292 "num_base_bdevs_operational": 4, 00:14:57.292 "process": { 00:14:57.292 "type": "rebuild", 00:14:57.292 "target": "spare", 00:14:57.292 "progress": { 00:14:57.292 "blocks": 10240, 00:14:57.292 "percent": 15 00:14:57.292 } 00:14:57.292 }, 00:14:57.292 "base_bdevs_list": [ 00:14:57.292 { 00:14:57.292 "name": "spare", 00:14:57.292 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:14:57.292 "is_configured": true, 00:14:57.292 "data_offset": 0, 00:14:57.292 "data_size": 65536 00:14:57.292 }, 00:14:57.292 { 00:14:57.292 "name": "BaseBdev2", 00:14:57.292 "uuid": "557dda96-12b0-5cf6-b84a-5009073e9e95", 00:14:57.292 "is_configured": true, 00:14:57.292 "data_offset": 0, 00:14:57.292 "data_size": 65536 00:14:57.292 }, 00:14:57.292 { 00:14:57.292 "name": "BaseBdev3", 00:14:57.292 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:57.292 "is_configured": true, 00:14:57.292 "data_offset": 0, 00:14:57.292 "data_size": 65536 00:14:57.292 }, 
00:14:57.292 { 00:14:57.292 "name": "BaseBdev4", 00:14:57.292 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:57.292 "is_configured": true, 00:14:57.292 "data_offset": 0, 00:14:57.292 "data_size": 65536 00:14:57.292 } 00:14:57.292 ] 00:14:57.292 }' 00:14:57.292 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.552 [2024-11-26 19:01:48.753700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.552 112.00 IOPS, 336.00 MiB/s [2024-11-26T19:01:48.919Z] [2024-11-26 19:01:48.901450] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:57.552 [2024-11-26 19:01:48.901667] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:57.552 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.812 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.812 "name": "raid_bdev1", 00:14:57.812 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:57.812 "strip_size_kb": 0, 00:14:57.812 "state": "online", 00:14:57.812 "raid_level": "raid1", 00:14:57.812 "superblock": false, 00:14:57.812 "num_base_bdevs": 4, 00:14:57.812 "num_base_bdevs_discovered": 3, 00:14:57.812 "num_base_bdevs_operational": 3, 00:14:57.812 "process": { 00:14:57.812 "type": "rebuild", 00:14:57.812 "target": "spare", 00:14:57.812 "progress": { 00:14:57.812 "blocks": 12288, 00:14:57.812 "percent": 18 00:14:57.812 } 00:14:57.812 }, 
00:14:57.812 "base_bdevs_list": [ 00:14:57.812 { 00:14:57.812 "name": "spare", 00:14:57.812 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:14:57.812 "is_configured": true, 00:14:57.812 "data_offset": 0, 00:14:57.812 "data_size": 65536 00:14:57.812 }, 00:14:57.812 { 00:14:57.812 "name": null, 00:14:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.813 "is_configured": false, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 }, 00:14:57.813 { 00:14:57.813 "name": "BaseBdev3", 00:14:57.813 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:57.813 "is_configured": true, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 }, 00:14:57.813 { 00:14:57.813 "name": "BaseBdev4", 00:14:57.813 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:57.813 "is_configured": true, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 } 00:14:57.813 ] 00:14:57.813 }' 00:14:57.813 19:01:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.813 [2024-11-26 19:01:49.017910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:57.813 [2024-11-26 19:01:49.018732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=528 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.813 [2024-11-26 19:01:49.151067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.813 "name": "raid_bdev1", 00:14:57.813 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:57.813 "strip_size_kb": 0, 00:14:57.813 "state": "online", 00:14:57.813 "raid_level": "raid1", 00:14:57.813 "superblock": false, 00:14:57.813 "num_base_bdevs": 4, 00:14:57.813 "num_base_bdevs_discovered": 3, 00:14:57.813 "num_base_bdevs_operational": 3, 00:14:57.813 "process": { 00:14:57.813 "type": "rebuild", 00:14:57.813 "target": "spare", 00:14:57.813 "progress": { 00:14:57.813 "blocks": 14336, 00:14:57.813 "percent": 21 00:14:57.813 } 00:14:57.813 }, 00:14:57.813 "base_bdevs_list": [ 00:14:57.813 { 00:14:57.813 "name": "spare", 00:14:57.813 "uuid": 
"3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:14:57.813 "is_configured": true, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 }, 00:14:57.813 { 00:14:57.813 "name": null, 00:14:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.813 "is_configured": false, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 }, 00:14:57.813 { 00:14:57.813 "name": "BaseBdev3", 00:14:57.813 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:57.813 "is_configured": true, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 }, 00:14:57.813 { 00:14:57.813 "name": "BaseBdev4", 00:14:57.813 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:57.813 "is_configured": true, 00:14:57.813 "data_offset": 0, 00:14:57.813 "data_size": 65536 00:14:57.813 } 00:14:57.813 ] 00:14:57.813 }' 00:14:57.813 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.813 [2024-11-26 19:01:49.159469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:58.071 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.071 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.071 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.071 19:01:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.330 [2024-11-26 19:01:49.596986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:58.589 103.60 IOPS, 310.80 MiB/s [2024-11-26T19:01:49.956Z] [2024-11-26 19:01:49.911605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:58.589 [2024-11-26 19:01:49.912246] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:58.848 [2024-11-26 19:01:50.031406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:58.848 [2024-11-26 19:01:50.032046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.108 "name": "raid_bdev1", 00:14:59.108 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:14:59.108 "strip_size_kb": 0, 00:14:59.108 "state": "online", 00:14:59.108 "raid_level": "raid1", 00:14:59.108 "superblock": 
false, 00:14:59.108 "num_base_bdevs": 4, 00:14:59.108 "num_base_bdevs_discovered": 3, 00:14:59.108 "num_base_bdevs_operational": 3, 00:14:59.108 "process": { 00:14:59.108 "type": "rebuild", 00:14:59.108 "target": "spare", 00:14:59.108 "progress": { 00:14:59.108 "blocks": 30720, 00:14:59.108 "percent": 46 00:14:59.108 } 00:14:59.108 }, 00:14:59.108 "base_bdevs_list": [ 00:14:59.108 { 00:14:59.108 "name": "spare", 00:14:59.108 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:14:59.108 "is_configured": true, 00:14:59.108 "data_offset": 0, 00:14:59.108 "data_size": 65536 00:14:59.108 }, 00:14:59.108 { 00:14:59.108 "name": null, 00:14:59.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.108 "is_configured": false, 00:14:59.108 "data_offset": 0, 00:14:59.108 "data_size": 65536 00:14:59.108 }, 00:14:59.108 { 00:14:59.108 "name": "BaseBdev3", 00:14:59.108 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:14:59.108 "is_configured": true, 00:14:59.108 "data_offset": 0, 00:14:59.108 "data_size": 65536 00:14:59.108 }, 00:14:59.108 { 00:14:59.108 "name": "BaseBdev4", 00:14:59.108 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:14:59.108 "is_configured": true, 00:14:59.108 "data_offset": 0, 00:14:59.108 "data_size": 65536 00:14:59.108 } 00:14:59.108 ] 00:14:59.108 }' 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.108 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.109 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.109 19:01:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.367 [2024-11-26 19:01:50.534197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 
36864 00:14:59.627 [2024-11-26 19:01:50.875053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:59.627 [2024-11-26 19:01:50.875812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:00.195 93.17 IOPS, 279.50 MiB/s [2024-11-26T19:01:51.562Z] 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.195 "name": "raid_bdev1", 00:15:00.195 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:15:00.195 "strip_size_kb": 0, 00:15:00.195 "state": "online", 00:15:00.195 "raid_level": "raid1", 00:15:00.195 "superblock": false, 00:15:00.195 "num_base_bdevs": 4, 00:15:00.195 
"num_base_bdevs_discovered": 3, 00:15:00.195 "num_base_bdevs_operational": 3, 00:15:00.195 "process": { 00:15:00.195 "type": "rebuild", 00:15:00.195 "target": "spare", 00:15:00.195 "progress": { 00:15:00.195 "blocks": 47104, 00:15:00.195 "percent": 71 00:15:00.195 } 00:15:00.195 }, 00:15:00.195 "base_bdevs_list": [ 00:15:00.195 { 00:15:00.195 "name": "spare", 00:15:00.195 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:15:00.195 "is_configured": true, 00:15:00.195 "data_offset": 0, 00:15:00.195 "data_size": 65536 00:15:00.195 }, 00:15:00.195 { 00:15:00.195 "name": null, 00:15:00.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.195 "is_configured": false, 00:15:00.195 "data_offset": 0, 00:15:00.195 "data_size": 65536 00:15:00.195 }, 00:15:00.195 { 00:15:00.195 "name": "BaseBdev3", 00:15:00.195 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:15:00.195 "is_configured": true, 00:15:00.195 "data_offset": 0, 00:15:00.195 "data_size": 65536 00:15:00.195 }, 00:15:00.195 { 00:15:00.195 "name": "BaseBdev4", 00:15:00.195 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:15:00.195 "is_configured": true, 00:15:00.195 "data_offset": 0, 00:15:00.195 "data_size": 65536 00:15:00.195 } 00:15:00.195 ] 00:15:00.195 }' 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.195 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.466 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.466 19:01:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.466 [2024-11-26 19:01:51.674340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:00.733 86.14 IOPS, 258.43 MiB/s 
[2024-11-26T19:01:52.100Z] [2024-11-26 19:01:51.899588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:00.992 [2024-11-26 19:01:52.123382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:01.251 [2024-11-26 19:01:52.577572] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.251 19:01:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.511 "name": "raid_bdev1", 00:15:01.511 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:15:01.511 "strip_size_kb": 0, 00:15:01.511 "state": "online", 00:15:01.511 "raid_level": "raid1", 
00:15:01.511 "superblock": false, 00:15:01.511 "num_base_bdevs": 4, 00:15:01.511 "num_base_bdevs_discovered": 3, 00:15:01.511 "num_base_bdevs_operational": 3, 00:15:01.511 "process": { 00:15:01.511 "type": "rebuild", 00:15:01.511 "target": "spare", 00:15:01.511 "progress": { 00:15:01.511 "blocks": 65536, 00:15:01.511 "percent": 100 00:15:01.511 } 00:15:01.511 }, 00:15:01.511 "base_bdevs_list": [ 00:15:01.511 { 00:15:01.511 "name": "spare", 00:15:01.511 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:15:01.511 "is_configured": true, 00:15:01.511 "data_offset": 0, 00:15:01.511 "data_size": 65536 00:15:01.511 }, 00:15:01.511 { 00:15:01.511 "name": null, 00:15:01.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.511 "is_configured": false, 00:15:01.511 "data_offset": 0, 00:15:01.511 "data_size": 65536 00:15:01.511 }, 00:15:01.511 { 00:15:01.511 "name": "BaseBdev3", 00:15:01.511 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:15:01.511 "is_configured": true, 00:15:01.511 "data_offset": 0, 00:15:01.511 "data_size": 65536 00:15:01.511 }, 00:15:01.511 { 00:15:01.511 "name": "BaseBdev4", 00:15:01.511 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:15:01.511 "is_configured": true, 00:15:01.511 "data_offset": 0, 00:15:01.511 "data_size": 65536 00:15:01.511 } 00:15:01.511 ] 00:15:01.511 }' 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.511 [2024-11-26 19:01:52.677600] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:01.511 [2024-11-26 19:01:52.680039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- 
# [[ spare == \s\p\a\r\e ]] 00:15:01.511 19:01:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.708 80.38 IOPS, 241.12 MiB/s [2024-11-26T19:01:54.075Z] 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.708 "name": "raid_bdev1", 00:15:02.708 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:15:02.708 "strip_size_kb": 0, 00:15:02.708 "state": "online", 00:15:02.708 "raid_level": "raid1", 00:15:02.708 "superblock": false, 00:15:02.708 "num_base_bdevs": 4, 00:15:02.708 "num_base_bdevs_discovered": 3, 00:15:02.708 "num_base_bdevs_operational": 3, 00:15:02.708 "base_bdevs_list": [ 00:15:02.708 { 00:15:02.708 "name": "spare", 00:15:02.708 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 
00:15:02.708 "is_configured": true, 00:15:02.708 "data_offset": 0, 00:15:02.708 "data_size": 65536 00:15:02.708 }, 00:15:02.708 { 00:15:02.708 "name": null, 00:15:02.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.708 "is_configured": false, 00:15:02.708 "data_offset": 0, 00:15:02.708 "data_size": 65536 00:15:02.708 }, 00:15:02.708 { 00:15:02.708 "name": "BaseBdev3", 00:15:02.708 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:15:02.708 "is_configured": true, 00:15:02.708 "data_offset": 0, 00:15:02.708 "data_size": 65536 00:15:02.708 }, 00:15:02.708 { 00:15:02.708 "name": "BaseBdev4", 00:15:02.708 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:15:02.708 "is_configured": true, 00:15:02.708 "data_offset": 0, 00:15:02.708 "data_size": 65536 00:15:02.708 } 00:15:02.708 ] 00:15:02.708 }' 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.708 76.33 IOPS, 229.00 MiB/s [2024-11-26T19:01:54.075Z] 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.708 19:01:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.708 19:01:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.708 "name": "raid_bdev1", 00:15:02.708 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:15:02.709 "strip_size_kb": 0, 00:15:02.709 "state": "online", 00:15:02.709 "raid_level": "raid1", 00:15:02.709 "superblock": false, 00:15:02.709 "num_base_bdevs": 4, 00:15:02.709 "num_base_bdevs_discovered": 3, 00:15:02.709 "num_base_bdevs_operational": 3, 00:15:02.709 "base_bdevs_list": [ 00:15:02.709 { 00:15:02.709 "name": "spare", 00:15:02.709 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:15:02.709 "is_configured": true, 00:15:02.709 "data_offset": 0, 00:15:02.709 "data_size": 65536 00:15:02.709 }, 00:15:02.709 { 00:15:02.709 "name": null, 00:15:02.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.709 "is_configured": false, 00:15:02.709 "data_offset": 0, 00:15:02.709 "data_size": 65536 00:15:02.709 }, 00:15:02.709 { 00:15:02.709 "name": "BaseBdev3", 00:15:02.709 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:15:02.709 "is_configured": true, 00:15:02.709 "data_offset": 0, 00:15:02.709 "data_size": 65536 00:15:02.709 }, 00:15:02.709 { 00:15:02.709 "name": "BaseBdev4", 00:15:02.709 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:15:02.709 "is_configured": true, 00:15:02.709 "data_offset": 0, 00:15:02.709 "data_size": 65536 00:15:02.709 } 00:15:02.709 ] 00:15:02.709 }' 00:15:02.709 19:01:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.709 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.709 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.967 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.967 "name": "raid_bdev1", 00:15:02.967 "uuid": "c66b23dd-92e0-4c94-ac34-aa985c9a5825", 00:15:02.967 "strip_size_kb": 0, 00:15:02.967 "state": "online", 00:15:02.967 "raid_level": "raid1", 00:15:02.967 "superblock": false, 00:15:02.967 "num_base_bdevs": 4, 00:15:02.967 "num_base_bdevs_discovered": 3, 00:15:02.967 "num_base_bdevs_operational": 3, 00:15:02.967 "base_bdevs_list": [ 00:15:02.967 { 00:15:02.967 "name": "spare", 00:15:02.967 "uuid": "3a3ed1d7-1336-5ec4-bf04-c9b9ffc253ea", 00:15:02.967 "is_configured": true, 00:15:02.967 "data_offset": 0, 00:15:02.967 "data_size": 65536 00:15:02.967 }, 00:15:02.967 { 00:15:02.967 "name": null, 00:15:02.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.967 "is_configured": false, 00:15:02.968 "data_offset": 0, 00:15:02.968 "data_size": 65536 00:15:02.968 }, 00:15:02.968 { 00:15:02.968 "name": "BaseBdev3", 00:15:02.968 "uuid": "96ee0b1e-2ef9-5d0c-a67f-cd9dce9cba52", 00:15:02.968 "is_configured": true, 00:15:02.968 "data_offset": 0, 00:15:02.968 "data_size": 65536 00:15:02.968 }, 00:15:02.968 { 00:15:02.968 "name": "BaseBdev4", 00:15:02.968 "uuid": "86d6119c-7b4c-5945-bd8a-9e54a00a0c8a", 00:15:02.968 "is_configured": true, 00:15:02.968 "data_offset": 0, 00:15:02.968 "data_size": 65536 00:15:02.968 } 00:15:02.968 ] 00:15:02.968 }' 00:15:02.968 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.968 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.536 
[2024-11-26 19:01:54.619804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.536 [2024-11-26 19:01:54.619847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.536 00:15:03.536 Latency(us) 00:15:03.536 [2024-11-26T19:01:54.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.536 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:03.536 raid_bdev1 : 9.86 72.80 218.41 0.00 0.00 19471.36 281.13 114866.73 00:15:03.536 [2024-11-26T19:01:54.903Z] =================================================================================================================== 00:15:03.536 [2024-11-26T19:01:54.903Z] Total : 72.80 218.41 0.00 0.00 19471.36 281.13 114866.73 00:15:03.536 [2024-11-26 19:01:54.740679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.536 { 00:15:03.536 "results": [ 00:15:03.536 { 00:15:03.536 "job": "raid_bdev1", 00:15:03.536 "core_mask": "0x1", 00:15:03.536 "workload": "randrw", 00:15:03.536 "percentage": 50, 00:15:03.536 "status": "finished", 00:15:03.536 "queue_depth": 2, 00:15:03.536 "io_size": 3145728, 00:15:03.536 "runtime": 9.862037, 00:15:03.536 "iops": 72.80443178219672, 00:15:03.536 "mibps": 218.41329534659013, 00:15:03.536 "io_failed": 0, 00:15:03.536 "io_timeout": 0, 00:15:03.536 "avg_latency_us": 19471.362593061534, 00:15:03.536 "min_latency_us": 281.13454545454545, 00:15:03.536 "max_latency_us": 114866.73454545454 00:15:03.536 } 00:15:03.536 ], 00:15:03.536 "core_count": 1 00:15:03.536 } 00:15:03.536 [2024-11-26 19:01:54.740978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.536 [2024-11-26 19:01:54.741164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.536 [2024-11-26 19:01:54.741187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.536 19:01:54 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:03.796 /dev/nbd0 00:15:03.796 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.055 1+0 records in 00:15:04.055 1+0 records out 00:15:04.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424575 s, 9.6 MB/s 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.055 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:04.315 /dev/nbd1 00:15:04.315 
19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.315 1+0 records in 00:15:04.315 1+0 records out 00:15:04.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328377 s, 12.5 MB/s 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:04.315 19:01:55 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.315 19:01:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.883 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:05.143 /dev/nbd1 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w 
nbd1 /proc/partitions 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.143 1+0 records in 00:15:05.143 1+0 records out 00:15:05.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086362 s, 4.7 MB/s 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.143 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.710 19:01:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79100 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79100 ']' 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79100 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79100 00:15:05.969 killing process with pid 79100 00:15:05.969 Received shutdown signal, test time was about 12.291920 seconds 00:15:05.969 00:15:05.969 Latency(us) 00:15:05.969 [2024-11-26T19:01:57.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.969 [2024-11-26T19:01:57.336Z] =================================================================================================================== 00:15:05.969 
[2024-11-26T19:01:57.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79100' 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79100 00:15:05.969 19:01:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79100 00:15:05.969 [2024-11-26 19:01:57.149580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.229 [2024-11-26 19:01:57.540304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.607 19:01:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:07.607 00:15:07.607 real 0m16.064s 00:15:07.607 user 0m20.963s 00:15:07.607 sys 0m1.895s 00:15:07.607 19:01:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.607 ************************************ 00:15:07.607 END TEST raid_rebuild_test_io 00:15:07.607 ************************************ 00:15:07.607 19:01:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.607 19:01:58 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:07.607 19:01:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:07.607 19:01:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.607 19:01:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.607 ************************************ 00:15:07.607 START TEST raid_rebuild_test_sb_io 00:15:07.607 ************************************ 00:15:07.607 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 
-- # raid_rebuild_test raid1 4 true true true 00:15:07.607 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79539 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79539 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79539 ']' 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 
-- # local max_retries=100 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.608 19:01:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.608 [2024-11-26 19:01:58.907533] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:15:07.608 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:07.608 Zero copy mechanism will not be used. 00:15:07.608 [2024-11-26 19:01:58.908002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79539 ] 00:15:07.867 [2024-11-26 19:01:59.099487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.124 [2024-11-26 19:01:59.256184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.382 [2024-11-26 19:01:59.495213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.382 [2024-11-26 19:01:59.495289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.642 BaseBdev1_malloc 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.642 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.642 [2024-11-26 19:01:59.972299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.642 [2024-11-26 19:01:59.972555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.642 [2024-11-26 19:01:59.972599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.642 [2024-11-26 19:01:59.972621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.642 [2024-11-26 19:01:59.975560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.642 [2024-11-26 19:01:59.975788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.643 BaseBdev1 00:15:08.643 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.643 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.643 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.643 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.643 19:01:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.902 BaseBdev2_malloc 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.902 [2024-11-26 19:02:00.029236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:08.902 [2024-11-26 19:02:00.029360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.902 [2024-11-26 19:02:00.029393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.902 [2024-11-26 19:02:00.029411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.902 [2024-11-26 19:02:00.032322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.902 [2024-11-26 19:02:00.032548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.902 BaseBdev2 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.902 BaseBdev3_malloc 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.902 [2024-11-26 19:02:00.091496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:08.902 [2024-11-26 19:02:00.091783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.902 [2024-11-26 19:02:00.091864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.902 [2024-11-26 19:02:00.092147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.902 [2024-11-26 19:02:00.095121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.902 [2024-11-26 19:02:00.095351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:08.902 BaseBdev3 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.902 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 BaseBdev4_malloc 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:08.903 
19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 [2024-11-26 19:02:00.149719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:08.903 [2024-11-26 19:02:00.149994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.903 [2024-11-26 19:02:00.150072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:08.903 [2024-11-26 19:02:00.150331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.903 [2024-11-26 19:02:00.153331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.903 [2024-11-26 19:02:00.153532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:08.903 BaseBdev4 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 spare_malloc 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 spare_delay 00:15:08.903 19:02:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 [2024-11-26 19:02:00.218959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.903 [2024-11-26 19:02:00.219031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.903 [2024-11-26 19:02:00.219060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:08.903 [2024-11-26 19:02:00.219078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.903 [2024-11-26 19:02:00.222154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.903 [2024-11-26 19:02:00.222209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.903 spare 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 [2024-11-26 19:02:00.227108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.903 [2024-11-26 19:02:00.229833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.903 [2024-11-26 19:02:00.230079] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.903 [2024-11-26 19:02:00.230315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.903 [2024-11-26 19:02:00.230698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.903 [2024-11-26 19:02:00.230867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.903 [2024-11-26 19:02:00.231351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:08.903 [2024-11-26 19:02:00.231761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.903 [2024-11-26 19:02:00.231910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.903 [2024-11-26 19:02:00.232302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.903 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.161 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.161 "name": "raid_bdev1", 00:15:09.161 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:09.161 "strip_size_kb": 0, 00:15:09.161 "state": "online", 00:15:09.161 "raid_level": "raid1", 00:15:09.161 "superblock": true, 00:15:09.161 "num_base_bdevs": 4, 00:15:09.161 "num_base_bdevs_discovered": 4, 00:15:09.161 "num_base_bdevs_operational": 4, 00:15:09.161 "base_bdevs_list": [ 00:15:09.161 { 00:15:09.161 "name": "BaseBdev1", 00:15:09.161 "uuid": "0fe48688-e00c-58d8-8702-e1da6fef6787", 00:15:09.161 "is_configured": true, 00:15:09.161 "data_offset": 2048, 00:15:09.161 "data_size": 63488 00:15:09.161 }, 00:15:09.161 { 00:15:09.161 "name": "BaseBdev2", 00:15:09.161 "uuid": "613fa1f2-6180-5b6f-a3a7-e036339980d4", 00:15:09.161 "is_configured": true, 00:15:09.161 "data_offset": 2048, 00:15:09.161 "data_size": 63488 00:15:09.161 }, 00:15:09.161 { 00:15:09.161 "name": "BaseBdev3", 00:15:09.161 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:09.161 "is_configured": true, 00:15:09.161 "data_offset": 2048, 00:15:09.161 "data_size": 63488 00:15:09.161 }, 00:15:09.161 { 00:15:09.161 
"name": "BaseBdev4", 00:15:09.161 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:09.161 "is_configured": true, 00:15:09.161 "data_offset": 2048, 00:15:09.161 "data_size": 63488 00:15:09.161 } 00:15:09.161 ] 00:15:09.161 }' 00:15:09.161 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.161 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.420 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.420 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.420 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.420 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.420 [2024-11-26 19:02:00.757077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.420 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:09.679 
19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 [2024-11-26 19:02:00.864506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.679 19:02:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.679 "name": "raid_bdev1", 00:15:09.679 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:09.679 "strip_size_kb": 0, 00:15:09.679 "state": "online", 00:15:09.679 "raid_level": "raid1", 00:15:09.679 "superblock": true, 00:15:09.679 "num_base_bdevs": 4, 00:15:09.679 "num_base_bdevs_discovered": 3, 00:15:09.679 "num_base_bdevs_operational": 3, 00:15:09.679 "base_bdevs_list": [ 00:15:09.679 { 00:15:09.679 "name": null, 00:15:09.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.679 "is_configured": false, 00:15:09.679 "data_offset": 0, 00:15:09.679 "data_size": 63488 00:15:09.679 }, 00:15:09.679 { 00:15:09.679 "name": "BaseBdev2", 00:15:09.679 "uuid": "613fa1f2-6180-5b6f-a3a7-e036339980d4", 00:15:09.679 "is_configured": true, 00:15:09.679 "data_offset": 2048, 00:15:09.679 "data_size": 63488 00:15:09.679 }, 00:15:09.679 { 00:15:09.679 "name": "BaseBdev3", 00:15:09.679 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:09.679 "is_configured": true, 00:15:09.679 "data_offset": 2048, 00:15:09.679 "data_size": 63488 00:15:09.679 }, 00:15:09.679 { 00:15:09.679 "name": "BaseBdev4", 00:15:09.679 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:09.679 "is_configured": true, 00:15:09.679 "data_offset": 2048, 00:15:09.679 "data_size": 63488 00:15:09.679 } 00:15:09.679 ] 00:15:09.679 }' 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.679 19:02:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.679 [2024-11-26 19:02:00.992968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:09.679 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.679 Zero copy mechanism will not be used. 00:15:09.679 Running I/O for 60 seconds... 00:15:10.247 19:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.247 19:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.247 19:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.247 [2024-11-26 19:02:01.397802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.247 19:02:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.247 19:02:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:10.247 [2024-11-26 19:02:01.462330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:10.247 [2024-11-26 19:02:01.465037] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.247 [2024-11-26 19:02:01.583399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:10.247 [2024-11-26 19:02:01.585280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:10.505 [2024-11-26 19:02:01.797980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:10.506 [2024-11-26 19:02:01.798830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:10.764 118.00 IOPS, 354.00 MiB/s [2024-11-26T19:02:02.131Z] [2024-11-26 19:02:02.126386] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:11.023 [2024-11-26 19:02:02.341096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:11.023 [2024-11-26 19:02:02.341509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.282 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.282 "name": "raid_bdev1", 00:15:11.282 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:11.282 "strip_size_kb": 0, 00:15:11.282 "state": "online", 00:15:11.282 "raid_level": "raid1", 00:15:11.282 "superblock": true, 00:15:11.282 "num_base_bdevs": 4, 00:15:11.282 "num_base_bdevs_discovered": 
4, 00:15:11.282 "num_base_bdevs_operational": 4, 00:15:11.282 "process": { 00:15:11.282 "type": "rebuild", 00:15:11.282 "target": "spare", 00:15:11.282 "progress": { 00:15:11.282 "blocks": 10240, 00:15:11.282 "percent": 16 00:15:11.282 } 00:15:11.282 }, 00:15:11.282 "base_bdevs_list": [ 00:15:11.282 { 00:15:11.282 "name": "spare", 00:15:11.282 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:11.282 "is_configured": true, 00:15:11.282 "data_offset": 2048, 00:15:11.282 "data_size": 63488 00:15:11.282 }, 00:15:11.282 { 00:15:11.282 "name": "BaseBdev2", 00:15:11.282 "uuid": "613fa1f2-6180-5b6f-a3a7-e036339980d4", 00:15:11.282 "is_configured": true, 00:15:11.282 "data_offset": 2048, 00:15:11.282 "data_size": 63488 00:15:11.282 }, 00:15:11.282 { 00:15:11.282 "name": "BaseBdev3", 00:15:11.282 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:11.283 "is_configured": true, 00:15:11.283 "data_offset": 2048, 00:15:11.283 "data_size": 63488 00:15:11.283 }, 00:15:11.283 { 00:15:11.283 "name": "BaseBdev4", 00:15:11.283 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:11.283 "is_configured": true, 00:15:11.283 "data_offset": 2048, 00:15:11.283 "data_size": 63488 00:15:11.283 } 00:15:11.283 ] 00:15:11.283 }' 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.283 [2024-11-26 19:02:02.613565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:11.283 [2024-11-26 19:02:02.614102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.283 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 [2024-11-26 19:02:02.628332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.542 [2024-11-26 19:02:02.729388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:11.542 [2024-11-26 19:02:02.846373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.542 [2024-11-26 19:02:02.861481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.542 [2024-11-26 19:02:02.861565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.542 [2024-11-26 19:02:02.861589] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.542 [2024-11-26 19:02:02.903762] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.801 19:02:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.801 "name": "raid_bdev1", 00:15:11.801 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:11.801 "strip_size_kb": 0, 00:15:11.801 "state": "online", 00:15:11.801 "raid_level": "raid1", 00:15:11.801 "superblock": true, 00:15:11.801 "num_base_bdevs": 4, 00:15:11.801 "num_base_bdevs_discovered": 3, 00:15:11.801 "num_base_bdevs_operational": 3, 00:15:11.801 "base_bdevs_list": [ 00:15:11.801 { 00:15:11.801 "name": null, 00:15:11.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.801 "is_configured": false, 00:15:11.801 "data_offset": 0, 00:15:11.801 "data_size": 63488 00:15:11.801 }, 00:15:11.801 { 00:15:11.801 "name": "BaseBdev2", 00:15:11.801 "uuid": "613fa1f2-6180-5b6f-a3a7-e036339980d4", 00:15:11.801 "is_configured": true, 00:15:11.801 "data_offset": 2048, 00:15:11.801 
"data_size": 63488 00:15:11.801 }, 00:15:11.801 { 00:15:11.801 "name": "BaseBdev3", 00:15:11.801 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:11.801 "is_configured": true, 00:15:11.801 "data_offset": 2048, 00:15:11.801 "data_size": 63488 00:15:11.801 }, 00:15:11.801 { 00:15:11.801 "name": "BaseBdev4", 00:15:11.801 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:11.801 "is_configured": true, 00:15:11.801 "data_offset": 2048, 00:15:11.801 "data_size": 63488 00:15:11.801 } 00:15:11.801 ] 00:15:11.801 }' 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.801 19:02:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.368 99.00 IOPS, 297.00 MiB/s [2024-11-26T19:02:03.735Z] 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.368 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:12.368 "name": "raid_bdev1", 00:15:12.368 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:12.368 "strip_size_kb": 0, 00:15:12.368 "state": "online", 00:15:12.368 "raid_level": "raid1", 00:15:12.368 "superblock": true, 00:15:12.368 "num_base_bdevs": 4, 00:15:12.368 "num_base_bdevs_discovered": 3, 00:15:12.368 "num_base_bdevs_operational": 3, 00:15:12.368 "base_bdevs_list": [ 00:15:12.368 { 00:15:12.368 "name": null, 00:15:12.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.369 "is_configured": false, 00:15:12.369 "data_offset": 0, 00:15:12.369 "data_size": 63488 00:15:12.369 }, 00:15:12.369 { 00:15:12.369 "name": "BaseBdev2", 00:15:12.369 "uuid": "613fa1f2-6180-5b6f-a3a7-e036339980d4", 00:15:12.369 "is_configured": true, 00:15:12.369 "data_offset": 2048, 00:15:12.369 "data_size": 63488 00:15:12.369 }, 00:15:12.369 { 00:15:12.369 "name": "BaseBdev3", 00:15:12.369 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:12.369 "is_configured": true, 00:15:12.369 "data_offset": 2048, 00:15:12.369 "data_size": 63488 00:15:12.369 }, 00:15:12.369 { 00:15:12.369 "name": "BaseBdev4", 00:15:12.369 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:12.369 "is_configured": true, 00:15:12.369 "data_offset": 2048, 00:15:12.369 "data_size": 63488 00:15:12.369 } 00:15:12.369 ] 00:15:12.369 }' 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.369 [2024-11-26 19:02:03.646830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.369 19:02:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:12.369 [2024-11-26 19:02:03.716127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:12.369 [2024-11-26 19:02:03.718853] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.627 [2024-11-26 19:02:03.828994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.627 [2024-11-26 19:02:03.829648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.886 133.67 IOPS, 401.00 MiB/s [2024-11-26T19:02:04.253Z] [2024-11-26 19:02:04.033166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.886 [2024-11-26 19:02:04.033473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.144 [2024-11-26 19:02:04.287202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:13.404 [2024-11-26 19:02:04.508739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:13.404 [2024-11-26 19:02:04.509890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.404 "name": "raid_bdev1", 00:15:13.404 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:13.404 "strip_size_kb": 0, 00:15:13.404 "state": "online", 00:15:13.404 "raid_level": "raid1", 00:15:13.404 "superblock": true, 00:15:13.404 "num_base_bdevs": 4, 00:15:13.404 "num_base_bdevs_discovered": 4, 00:15:13.404 "num_base_bdevs_operational": 4, 00:15:13.404 "process": { 00:15:13.404 "type": "rebuild", 00:15:13.404 "target": "spare", 00:15:13.404 "progress": { 00:15:13.404 "blocks": 10240, 00:15:13.404 "percent": 16 00:15:13.404 } 00:15:13.404 }, 00:15:13.404 "base_bdevs_list": [ 00:15:13.404 { 00:15:13.404 "name": "spare", 00:15:13.404 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:13.404 "is_configured": true, 00:15:13.404 "data_offset": 2048, 00:15:13.404 "data_size": 63488 00:15:13.404 }, 00:15:13.404 { 
00:15:13.404 "name": "BaseBdev2", 00:15:13.404 "uuid": "613fa1f2-6180-5b6f-a3a7-e036339980d4", 00:15:13.404 "is_configured": true, 00:15:13.404 "data_offset": 2048, 00:15:13.404 "data_size": 63488 00:15:13.404 }, 00:15:13.404 { 00:15:13.404 "name": "BaseBdev3", 00:15:13.404 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:13.404 "is_configured": true, 00:15:13.404 "data_offset": 2048, 00:15:13.404 "data_size": 63488 00:15:13.404 }, 00:15:13.404 { 00:15:13.404 "name": "BaseBdev4", 00:15:13.404 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:13.404 "is_configured": true, 00:15:13.404 "data_offset": 2048, 00:15:13.404 "data_size": 63488 00:15:13.404 } 00:15:13.404 ] 00:15:13.404 }' 00:15:13.404 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:13.663 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.663 19:02:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.663 [2024-11-26 19:02:04.896663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.663 117.50 IOPS, 352.50 MiB/s [2024-11-26T19:02:05.030Z] [2024-11-26 19:02:05.008578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:13.922 [2024-11-26 19:02:05.212799] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:13.922 [2024-11-26 19:02:05.212851] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.922 "name": "raid_bdev1", 00:15:13.922 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:13.922 "strip_size_kb": 0, 00:15:13.922 "state": "online", 00:15:13.922 "raid_level": "raid1", 00:15:13.922 "superblock": true, 00:15:13.922 "num_base_bdevs": 4, 00:15:13.922 "num_base_bdevs_discovered": 3, 00:15:13.922 "num_base_bdevs_operational": 3, 00:15:13.922 "process": { 00:15:13.922 "type": "rebuild", 00:15:13.922 "target": "spare", 00:15:13.922 "progress": { 00:15:13.922 "blocks": 16384, 00:15:13.922 "percent": 25 00:15:13.922 } 00:15:13.922 }, 00:15:13.922 "base_bdevs_list": [ 00:15:13.922 { 00:15:13.922 "name": "spare", 00:15:13.922 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 2048, 00:15:13.922 "data_size": 63488 00:15:13.922 }, 00:15:13.922 { 00:15:13.922 "name": null, 00:15:13.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.922 "is_configured": false, 00:15:13.922 "data_offset": 0, 00:15:13.922 "data_size": 63488 00:15:13.922 }, 00:15:13.922 { 00:15:13.922 "name": "BaseBdev3", 00:15:13.922 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 2048, 00:15:13.922 "data_size": 63488 00:15:13.922 }, 00:15:13.922 { 00:15:13.922 "name": "BaseBdev4", 00:15:13.922 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 2048, 00:15:13.922 "data_size": 63488 00:15:13.922 } 00:15:13.922 ] 00:15:13.922 }' 00:15:13.922 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.181 19:02:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=544 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.181 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.181 "name": "raid_bdev1", 00:15:14.182 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:14.182 "strip_size_kb": 0, 00:15:14.182 "state": "online", 00:15:14.182 "raid_level": "raid1", 00:15:14.182 
"superblock": true, 00:15:14.182 "num_base_bdevs": 4, 00:15:14.182 "num_base_bdevs_discovered": 3, 00:15:14.182 "num_base_bdevs_operational": 3, 00:15:14.182 "process": { 00:15:14.182 "type": "rebuild", 00:15:14.182 "target": "spare", 00:15:14.182 "progress": { 00:15:14.182 "blocks": 18432, 00:15:14.182 "percent": 29 00:15:14.182 } 00:15:14.182 }, 00:15:14.182 "base_bdevs_list": [ 00:15:14.182 { 00:15:14.182 "name": "spare", 00:15:14.182 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:14.182 "is_configured": true, 00:15:14.182 "data_offset": 2048, 00:15:14.182 "data_size": 63488 00:15:14.182 }, 00:15:14.182 { 00:15:14.182 "name": null, 00:15:14.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.182 "is_configured": false, 00:15:14.182 "data_offset": 0, 00:15:14.182 "data_size": 63488 00:15:14.182 }, 00:15:14.182 { 00:15:14.182 "name": "BaseBdev3", 00:15:14.182 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:14.182 "is_configured": true, 00:15:14.182 "data_offset": 2048, 00:15:14.182 "data_size": 63488 00:15:14.182 }, 00:15:14.182 { 00:15:14.182 "name": "BaseBdev4", 00:15:14.182 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:14.182 "is_configured": true, 00:15:14.182 "data_offset": 2048, 00:15:14.182 "data_size": 63488 00:15:14.182 } 00:15:14.182 ] 00:15:14.182 }' 00:15:14.182 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.182 [2024-11-26 19:02:05.468696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:14.182 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.182 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.441 19:02:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.441 19:02:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.441 [2024-11-26 19:02:05.714382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:14.704 [2024-11-26 19:02:05.979133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:14.979 106.00 IOPS, 318.00 MiB/s [2024-11-26T19:02:06.346Z] [2024-11-26 19:02:06.230498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.238 [2024-11-26 19:02:06.569399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:15.238 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.497 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.497 "name": "raid_bdev1", 00:15:15.497 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:15.497 "strip_size_kb": 0, 00:15:15.497 "state": "online", 00:15:15.497 "raid_level": "raid1", 00:15:15.497 "superblock": true, 00:15:15.497 "num_base_bdevs": 4, 00:15:15.497 "num_base_bdevs_discovered": 3, 00:15:15.497 "num_base_bdevs_operational": 3, 00:15:15.497 "process": { 00:15:15.497 "type": "rebuild", 00:15:15.497 "target": "spare", 00:15:15.497 "progress": { 00:15:15.497 "blocks": 30720, 00:15:15.497 "percent": 48 00:15:15.497 } 00:15:15.497 }, 00:15:15.497 "base_bdevs_list": [ 00:15:15.497 { 00:15:15.497 "name": "spare", 00:15:15.497 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:15.497 "is_configured": true, 00:15:15.497 "data_offset": 2048, 00:15:15.497 "data_size": 63488 00:15:15.497 }, 00:15:15.497 { 00:15:15.497 "name": null, 00:15:15.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.497 "is_configured": false, 00:15:15.497 "data_offset": 0, 00:15:15.497 "data_size": 63488 00:15:15.497 }, 00:15:15.497 { 00:15:15.497 "name": "BaseBdev3", 00:15:15.497 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:15.497 "is_configured": true, 00:15:15.497 "data_offset": 2048, 00:15:15.497 "data_size": 63488 00:15:15.497 }, 00:15:15.497 { 00:15:15.497 "name": "BaseBdev4", 00:15:15.497 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:15.497 "is_configured": true, 00:15:15.497 "data_offset": 2048, 00:15:15.498 "data_size": 63488 00:15:15.498 } 00:15:15.498 ] 00:15:15.498 }' 00:15:15.498 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.498 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.498 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:15.498 [2024-11-26 19:02:06.696809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:15.498 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.498 19:02:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.323 95.50 IOPS, 286.50 MiB/s [2024-11-26T19:02:07.690Z] [2024-11-26 19:02:07.384946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:16.323 [2024-11-26 19:02:07.516460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.583 [2024-11-26 19:02:07.757797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.583 "name": "raid_bdev1", 00:15:16.583 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:16.583 "strip_size_kb": 0, 00:15:16.583 "state": "online", 00:15:16.583 "raid_level": "raid1", 00:15:16.583 "superblock": true, 00:15:16.583 "num_base_bdevs": 4, 00:15:16.583 "num_base_bdevs_discovered": 3, 00:15:16.583 "num_base_bdevs_operational": 3, 00:15:16.583 "process": { 00:15:16.583 "type": "rebuild", 00:15:16.583 "target": "spare", 00:15:16.583 "progress": { 00:15:16.583 "blocks": 49152, 00:15:16.583 "percent": 77 00:15:16.583 } 00:15:16.583 }, 00:15:16.583 "base_bdevs_list": [ 00:15:16.583 { 00:15:16.583 "name": "spare", 00:15:16.583 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:16.583 "is_configured": true, 00:15:16.583 "data_offset": 2048, 00:15:16.583 "data_size": 63488 00:15:16.583 }, 00:15:16.583 { 00:15:16.583 "name": null, 00:15:16.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.583 "is_configured": false, 00:15:16.583 "data_offset": 0, 00:15:16.583 "data_size": 63488 00:15:16.583 }, 00:15:16.583 { 00:15:16.583 "name": "BaseBdev3", 00:15:16.583 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:16.583 "is_configured": true, 00:15:16.583 "data_offset": 2048, 00:15:16.583 "data_size": 63488 00:15:16.583 }, 00:15:16.583 { 00:15:16.583 "name": "BaseBdev4", 00:15:16.583 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:16.583 "is_configured": true, 00:15:16.583 "data_offset": 2048, 00:15:16.583 "data_size": 63488 00:15:16.583 } 00:15:16.583 ] 00:15:16.583 }' 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.583 19:02:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.583 19:02:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.842 [2024-11-26 19:02:07.969021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:17.101 88.57 IOPS, 265.71 MiB/s [2024-11-26T19:02:08.468Z] [2024-11-26 19:02:08.224263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:17.101 [2024-11-26 19:02:08.225135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:17.101 [2024-11-26 19:02:08.427776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:17.359 [2024-11-26 19:02:08.665808] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:17.618 [2024-11-26 19:02:08.772805] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:17.618 [2024-11-26 19:02:08.777874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.618 "name": "raid_bdev1", 00:15:17.618 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:17.618 "strip_size_kb": 0, 00:15:17.618 "state": "online", 00:15:17.618 "raid_level": "raid1", 00:15:17.618 "superblock": true, 00:15:17.618 "num_base_bdevs": 4, 00:15:17.618 "num_base_bdevs_discovered": 3, 00:15:17.618 "num_base_bdevs_operational": 3, 00:15:17.618 "base_bdevs_list": [ 00:15:17.618 { 00:15:17.618 "name": "spare", 00:15:17.618 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:17.618 "is_configured": true, 00:15:17.618 "data_offset": 2048, 00:15:17.618 "data_size": 63488 00:15:17.618 }, 00:15:17.618 { 00:15:17.618 "name": null, 00:15:17.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.618 "is_configured": false, 00:15:17.618 "data_offset": 0, 00:15:17.618 "data_size": 63488 00:15:17.618 }, 00:15:17.618 { 00:15:17.618 "name": "BaseBdev3", 00:15:17.618 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:17.618 "is_configured": true, 00:15:17.618 "data_offset": 2048, 00:15:17.618 "data_size": 63488 00:15:17.618 }, 00:15:17.618 { 00:15:17.618 "name": "BaseBdev4", 
00:15:17.618 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:17.618 "is_configured": true, 00:15:17.618 "data_offset": 2048, 00:15:17.618 "data_size": 63488 00:15:17.618 } 00:15:17.618 ] 00:15:17.618 }' 00:15:17.618 19:02:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.877 80.25 IOPS, 240.75 MiB/s [2024-11-26T19:02:09.244Z] 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.877 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.877 19:02:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.877 "name": "raid_bdev1", 00:15:17.877 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:17.878 "strip_size_kb": 0, 00:15:17.878 "state": "online", 00:15:17.878 "raid_level": "raid1", 00:15:17.878 "superblock": true, 00:15:17.878 "num_base_bdevs": 4, 00:15:17.878 "num_base_bdevs_discovered": 3, 00:15:17.878 "num_base_bdevs_operational": 3, 00:15:17.878 "base_bdevs_list": [ 00:15:17.878 { 00:15:17.878 "name": "spare", 00:15:17.878 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:17.878 "is_configured": true, 00:15:17.878 "data_offset": 2048, 00:15:17.878 "data_size": 63488 00:15:17.878 }, 00:15:17.878 { 00:15:17.878 "name": null, 00:15:17.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.878 "is_configured": false, 00:15:17.878 "data_offset": 0, 00:15:17.878 "data_size": 63488 00:15:17.878 }, 00:15:17.878 { 00:15:17.878 "name": "BaseBdev3", 00:15:17.878 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:17.878 "is_configured": true, 00:15:17.878 "data_offset": 2048, 00:15:17.878 "data_size": 63488 00:15:17.878 }, 00:15:17.878 { 00:15:17.878 "name": "BaseBdev4", 00:15:17.878 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:17.878 "is_configured": true, 00:15:17.878 "data_offset": 2048, 00:15:17.878 "data_size": 63488 00:15:17.878 } 00:15:17.878 ] 00:15:17.878 }' 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.878 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.136 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.136 "name": "raid_bdev1", 00:15:18.136 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:18.136 "strip_size_kb": 0, 00:15:18.136 "state": "online", 00:15:18.136 "raid_level": "raid1", 00:15:18.136 "superblock": true, 00:15:18.136 "num_base_bdevs": 4, 00:15:18.136 "num_base_bdevs_discovered": 3, 00:15:18.136 
"num_base_bdevs_operational": 3, 00:15:18.136 "base_bdevs_list": [ 00:15:18.136 { 00:15:18.136 "name": "spare", 00:15:18.136 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:18.136 "is_configured": true, 00:15:18.136 "data_offset": 2048, 00:15:18.136 "data_size": 63488 00:15:18.136 }, 00:15:18.136 { 00:15:18.136 "name": null, 00:15:18.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.136 "is_configured": false, 00:15:18.136 "data_offset": 0, 00:15:18.136 "data_size": 63488 00:15:18.136 }, 00:15:18.136 { 00:15:18.136 "name": "BaseBdev3", 00:15:18.136 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:18.136 "is_configured": true, 00:15:18.136 "data_offset": 2048, 00:15:18.136 "data_size": 63488 00:15:18.136 }, 00:15:18.136 { 00:15:18.136 "name": "BaseBdev4", 00:15:18.136 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:18.136 "is_configured": true, 00:15:18.136 "data_offset": 2048, 00:15:18.136 "data_size": 63488 00:15:18.136 } 00:15:18.136 ] 00:15:18.136 }' 00:15:18.136 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.136 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.395 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.395 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.395 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.395 [2024-11-26 19:02:09.739521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.395 [2024-11-26 19:02:09.739574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.654 00:15:18.654 Latency(us) 00:15:18.654 [2024-11-26T19:02:10.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.654 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, 
percentage: 50, depth: 2, IO size: 3145728) 00:15:18.654 raid_bdev1 : 8.77 75.93 227.78 0.00 0.00 17237.46 273.69 112483.61 00:15:18.654 [2024-11-26T19:02:10.021Z] =================================================================================================================== 00:15:18.654 [2024-11-26T19:02:10.021Z] Total : 75.93 227.78 0.00 0.00 17237.46 273.69 112483.61 00:15:18.654 [2024-11-26 19:02:09.786806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.654 [2024-11-26 19:02:09.786859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.654 [2024-11-26 19:02:09.787059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.654 [2024-11-26 19:02:09.787088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:18.654 { 00:15:18.654 "results": [ 00:15:18.654 { 00:15:18.654 "job": "raid_bdev1", 00:15:18.654 "core_mask": "0x1", 00:15:18.654 "workload": "randrw", 00:15:18.654 "percentage": 50, 00:15:18.654 "status": "finished", 00:15:18.654 "queue_depth": 2, 00:15:18.654 "io_size": 3145728, 00:15:18.654 "runtime": 8.771564, 00:15:18.654 "iops": 75.92716646655032, 00:15:18.654 "mibps": 227.78149939965095, 00:15:18.654 "io_failed": 0, 00:15:18.654 "io_timeout": 0, 00:15:18.654 "avg_latency_us": 17237.463325143322, 00:15:18.654 "min_latency_us": 273.6872727272727, 00:15:18.654 "max_latency_us": 112483.60727272727 00:15:18.654 } 00:15:18.654 ], 00:15:18.654 "core_count": 1 00:15:18.654 } 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.654 19:02:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:18.913 /dev/nbd0 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.913 19:02:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.913 1+0 records in 00:15:18.913 1+0 records out 00:15:18.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439188 s, 9.3 MB/s 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.913 
19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:18.913 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.914 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:19.173 /dev/nbd1 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.173 1+0 records in 00:15:19.173 1+0 records out 00:15:19.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396311 s, 10.3 MB/s 00:15:19.173 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.432 19:02:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:19.432 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.433 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:19.433 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.433 19:02:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.692 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:19.951 /dev/nbd1 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:20.210 19:02:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.210 1+0 records in 00:15:20.210 1+0 records out 00:15:20.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307056 s, 13.3 MB/s 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.210 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.470 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.471 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.471 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.471 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.471 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.471 19:02:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.729 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.729 [2024-11-26 19:02:12.058444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.729 [2024-11-26 19:02:12.058510] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.730 [2024-11-26 19:02:12.058543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:20.730 [2024-11-26 19:02:12.058559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.730 [2024-11-26 19:02:12.061792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.730 [2024-11-26 19:02:12.061853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.730 [2024-11-26 19:02:12.062031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:20.730 [2024-11-26 19:02:12.062103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.730 [2024-11-26 19:02:12.062304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.730 [2024-11-26 19:02:12.062448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.730 spare 00:15:20.730 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.730 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:20.730 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.730 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.987 [2024-11-26 19:02:12.162639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:20.987 [2024-11-26 19:02:12.162667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.987 [2024-11-26 19:02:12.163076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:20.987 [2024-11-26 19:02:12.163313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:15:20.987 [2024-11-26 19:02:12.163338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:20.987 [2024-11-26 19:02:12.163572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.987 
19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.987 "name": "raid_bdev1", 00:15:20.987 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:20.987 "strip_size_kb": 0, 00:15:20.987 "state": "online", 00:15:20.987 "raid_level": "raid1", 00:15:20.987 "superblock": true, 00:15:20.987 "num_base_bdevs": 4, 00:15:20.987 "num_base_bdevs_discovered": 3, 00:15:20.987 "num_base_bdevs_operational": 3, 00:15:20.987 "base_bdevs_list": [ 00:15:20.987 { 00:15:20.987 "name": "spare", 00:15:20.987 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:20.987 "is_configured": true, 00:15:20.987 "data_offset": 2048, 00:15:20.987 "data_size": 63488 00:15:20.987 }, 00:15:20.987 { 00:15:20.987 "name": null, 00:15:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.987 "is_configured": false, 00:15:20.987 "data_offset": 2048, 00:15:20.987 "data_size": 63488 00:15:20.987 }, 00:15:20.987 { 00:15:20.987 "name": "BaseBdev3", 00:15:20.987 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:20.987 "is_configured": true, 00:15:20.987 "data_offset": 2048, 00:15:20.987 "data_size": 63488 00:15:20.987 }, 00:15:20.987 { 00:15:20.987 "name": "BaseBdev4", 00:15:20.987 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:20.987 "is_configured": true, 00:15:20.987 "data_offset": 2048, 00:15:20.987 "data_size": 63488 00:15:20.987 } 00:15:20.987 ] 00:15:20.987 }' 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.987 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.553 19:02:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.553 "name": "raid_bdev1", 00:15:21.553 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:21.553 "strip_size_kb": 0, 00:15:21.553 "state": "online", 00:15:21.553 "raid_level": "raid1", 00:15:21.553 "superblock": true, 00:15:21.553 "num_base_bdevs": 4, 00:15:21.553 "num_base_bdevs_discovered": 3, 00:15:21.553 "num_base_bdevs_operational": 3, 00:15:21.553 "base_bdevs_list": [ 00:15:21.553 { 00:15:21.553 "name": "spare", 00:15:21.553 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:21.553 "is_configured": true, 00:15:21.553 "data_offset": 2048, 00:15:21.553 "data_size": 63488 00:15:21.553 }, 00:15:21.553 { 00:15:21.553 "name": null, 00:15:21.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.553 "is_configured": false, 00:15:21.553 "data_offset": 2048, 00:15:21.553 "data_size": 63488 00:15:21.553 }, 00:15:21.553 { 00:15:21.553 "name": "BaseBdev3", 00:15:21.553 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:21.553 "is_configured": true, 00:15:21.553 "data_offset": 2048, 00:15:21.553 
"data_size": 63488 00:15:21.553 }, 00:15:21.553 { 00:15:21.553 "name": "BaseBdev4", 00:15:21.553 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:21.553 "is_configured": true, 00:15:21.553 "data_offset": 2048, 00:15:21.553 "data_size": 63488 00:15:21.553 } 00:15:21.553 ] 00:15:21.553 }' 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 [2024-11-26 19:02:12.867151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.553 19:02:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.553 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.840 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.840 "name": "raid_bdev1", 00:15:21.840 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:21.840 "strip_size_kb": 0, 00:15:21.840 "state": "online", 00:15:21.840 "raid_level": "raid1", 00:15:21.840 
"superblock": true, 00:15:21.840 "num_base_bdevs": 4, 00:15:21.840 "num_base_bdevs_discovered": 2, 00:15:21.840 "num_base_bdevs_operational": 2, 00:15:21.840 "base_bdevs_list": [ 00:15:21.840 { 00:15:21.840 "name": null, 00:15:21.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.840 "is_configured": false, 00:15:21.840 "data_offset": 0, 00:15:21.840 "data_size": 63488 00:15:21.840 }, 00:15:21.840 { 00:15:21.840 "name": null, 00:15:21.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.840 "is_configured": false, 00:15:21.840 "data_offset": 2048, 00:15:21.840 "data_size": 63488 00:15:21.840 }, 00:15:21.840 { 00:15:21.840 "name": "BaseBdev3", 00:15:21.840 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:21.840 "is_configured": true, 00:15:21.840 "data_offset": 2048, 00:15:21.840 "data_size": 63488 00:15:21.840 }, 00:15:21.840 { 00:15:21.840 "name": "BaseBdev4", 00:15:21.840 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:21.840 "is_configured": true, 00:15:21.840 "data_offset": 2048, 00:15:21.840 "data_size": 63488 00:15:21.840 } 00:15:21.840 ] 00:15:21.840 }' 00:15:21.840 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.840 19:02:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.097 19:02:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.097 19:02:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.097 19:02:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.097 [2024-11-26 19:02:13.383454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.097 [2024-11-26 19:02:13.383804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:22.097 [2024-11-26 19:02:13.383827] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:22.097 [2024-11-26 19:02:13.383910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.097 [2024-11-26 19:02:13.398033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:22.097 19:02:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.097 19:02:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:22.097 [2024-11-26 19:02:13.400851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.471 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.471 "name": "raid_bdev1", 00:15:23.471 "uuid": 
"863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:23.471 "strip_size_kb": 0, 00:15:23.471 "state": "online", 00:15:23.471 "raid_level": "raid1", 00:15:23.471 "superblock": true, 00:15:23.471 "num_base_bdevs": 4, 00:15:23.471 "num_base_bdevs_discovered": 3, 00:15:23.471 "num_base_bdevs_operational": 3, 00:15:23.471 "process": { 00:15:23.471 "type": "rebuild", 00:15:23.471 "target": "spare", 00:15:23.471 "progress": { 00:15:23.471 "blocks": 20480, 00:15:23.471 "percent": 32 00:15:23.471 } 00:15:23.471 }, 00:15:23.471 "base_bdevs_list": [ 00:15:23.471 { 00:15:23.471 "name": "spare", 00:15:23.472 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:23.472 "is_configured": true, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 }, 00:15:23.472 { 00:15:23.472 "name": null, 00:15:23.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.472 "is_configured": false, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 }, 00:15:23.472 { 00:15:23.472 "name": "BaseBdev3", 00:15:23.472 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:23.472 "is_configured": true, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 }, 00:15:23.472 { 00:15:23.472 "name": "BaseBdev4", 00:15:23.472 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:23.472 "is_configured": true, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 } 00:15:23.472 ] 00:15:23.472 }' 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.472 [2024-11-26 19:02:14.570254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.472 [2024-11-26 19:02:14.610166] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.472 [2024-11-26 19:02:14.610465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.472 [2024-11-26 19:02:14.610750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.472 [2024-11-26 19:02:14.610805] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.472 "name": "raid_bdev1", 00:15:23.472 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:23.472 "strip_size_kb": 0, 00:15:23.472 "state": "online", 00:15:23.472 "raid_level": "raid1", 00:15:23.472 "superblock": true, 00:15:23.472 "num_base_bdevs": 4, 00:15:23.472 "num_base_bdevs_discovered": 2, 00:15:23.472 "num_base_bdevs_operational": 2, 00:15:23.472 "base_bdevs_list": [ 00:15:23.472 { 00:15:23.472 "name": null, 00:15:23.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.472 "is_configured": false, 00:15:23.472 "data_offset": 0, 00:15:23.472 "data_size": 63488 00:15:23.472 }, 00:15:23.472 { 00:15:23.472 "name": null, 00:15:23.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.472 "is_configured": false, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 }, 00:15:23.472 { 00:15:23.472 "name": "BaseBdev3", 00:15:23.472 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:23.472 "is_configured": true, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 }, 00:15:23.472 { 00:15:23.472 "name": "BaseBdev4", 00:15:23.472 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 
00:15:23.472 "is_configured": true, 00:15:23.472 "data_offset": 2048, 00:15:23.472 "data_size": 63488 00:15:23.472 } 00:15:23.472 ] 00:15:23.472 }' 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.472 19:02:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.053 19:02:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.053 19:02:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.053 19:02:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.053 [2024-11-26 19:02:15.158414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.053 [2024-11-26 19:02:15.158507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.053 [2024-11-26 19:02:15.158552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:24.053 [2024-11-26 19:02:15.158567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.053 [2024-11-26 19:02:15.159267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.053 [2024-11-26 19:02:15.159316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.053 [2024-11-26 19:02:15.159480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:24.053 [2024-11-26 19:02:15.159509] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:24.053 [2024-11-26 19:02:15.159532] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:24.053 [2024-11-26 19:02:15.159562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.053 [2024-11-26 19:02:15.174261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:24.053 spare 00:15:24.053 19:02:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.053 19:02:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:24.053 [2024-11-26 19:02:15.176995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.991 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.991 "name": "raid_bdev1", 00:15:24.991 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:24.991 "strip_size_kb": 0, 00:15:24.991 
"state": "online", 00:15:24.991 "raid_level": "raid1", 00:15:24.991 "superblock": true, 00:15:24.991 "num_base_bdevs": 4, 00:15:24.991 "num_base_bdevs_discovered": 3, 00:15:24.992 "num_base_bdevs_operational": 3, 00:15:24.992 "process": { 00:15:24.992 "type": "rebuild", 00:15:24.992 "target": "spare", 00:15:24.992 "progress": { 00:15:24.992 "blocks": 20480, 00:15:24.992 "percent": 32 00:15:24.992 } 00:15:24.992 }, 00:15:24.992 "base_bdevs_list": [ 00:15:24.992 { 00:15:24.992 "name": "spare", 00:15:24.992 "uuid": "b655ef06-8796-54cb-949b-6883a1872165", 00:15:24.992 "is_configured": true, 00:15:24.992 "data_offset": 2048, 00:15:24.992 "data_size": 63488 00:15:24.992 }, 00:15:24.992 { 00:15:24.992 "name": null, 00:15:24.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.992 "is_configured": false, 00:15:24.992 "data_offset": 2048, 00:15:24.992 "data_size": 63488 00:15:24.992 }, 00:15:24.992 { 00:15:24.992 "name": "BaseBdev3", 00:15:24.992 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:24.992 "is_configured": true, 00:15:24.992 "data_offset": 2048, 00:15:24.992 "data_size": 63488 00:15:24.992 }, 00:15:24.992 { 00:15:24.992 "name": "BaseBdev4", 00:15:24.992 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:24.992 "is_configured": true, 00:15:24.992 "data_offset": 2048, 00:15:24.992 "data_size": 63488 00:15:24.992 } 00:15:24.992 ] 00:15:24.992 }' 00:15:24.992 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.992 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.992 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.992 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.992 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:24.992 19:02:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.992 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.250 [2024-11-26 19:02:16.358645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.250 [2024-11-26 19:02:16.386504] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.250 [2024-11-26 19:02:16.386601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.250 [2024-11-26 19:02:16.386625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.250 [2024-11-26 19:02:16.386641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.250 19:02:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.250 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.250 "name": "raid_bdev1", 00:15:25.250 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:25.250 "strip_size_kb": 0, 00:15:25.250 "state": "online", 00:15:25.250 "raid_level": "raid1", 00:15:25.250 "superblock": true, 00:15:25.250 "num_base_bdevs": 4, 00:15:25.250 "num_base_bdevs_discovered": 2, 00:15:25.250 "num_base_bdevs_operational": 2, 00:15:25.250 "base_bdevs_list": [ 00:15:25.250 { 00:15:25.250 "name": null, 00:15:25.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.250 "is_configured": false, 00:15:25.250 "data_offset": 0, 00:15:25.250 "data_size": 63488 00:15:25.250 }, 00:15:25.250 { 00:15:25.250 "name": null, 00:15:25.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.250 "is_configured": false, 00:15:25.250 "data_offset": 2048, 00:15:25.250 "data_size": 63488 00:15:25.250 }, 00:15:25.250 { 00:15:25.250 "name": "BaseBdev3", 00:15:25.250 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:25.250 "is_configured": true, 00:15:25.251 "data_offset": 2048, 00:15:25.251 "data_size": 63488 00:15:25.251 }, 00:15:25.251 { 00:15:25.251 "name": "BaseBdev4", 00:15:25.251 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:25.251 "is_configured": true, 00:15:25.251 "data_offset": 2048, 00:15:25.251 
"data_size": 63488 00:15:25.251 } 00:15:25.251 ] 00:15:25.251 }' 00:15:25.251 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.251 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.817 "name": "raid_bdev1", 00:15:25.817 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:25.817 "strip_size_kb": 0, 00:15:25.817 "state": "online", 00:15:25.817 "raid_level": "raid1", 00:15:25.817 "superblock": true, 00:15:25.817 "num_base_bdevs": 4, 00:15:25.817 "num_base_bdevs_discovered": 2, 00:15:25.817 "num_base_bdevs_operational": 2, 00:15:25.817 "base_bdevs_list": [ 00:15:25.817 { 00:15:25.817 "name": null, 00:15:25.817 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:25.817 "is_configured": false, 00:15:25.817 "data_offset": 0, 00:15:25.817 "data_size": 63488 00:15:25.817 }, 00:15:25.817 { 00:15:25.817 "name": null, 00:15:25.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.817 "is_configured": false, 00:15:25.817 "data_offset": 2048, 00:15:25.817 "data_size": 63488 00:15:25.817 }, 00:15:25.817 { 00:15:25.817 "name": "BaseBdev3", 00:15:25.817 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:25.817 "is_configured": true, 00:15:25.817 "data_offset": 2048, 00:15:25.817 "data_size": 63488 00:15:25.817 }, 00:15:25.817 { 00:15:25.817 "name": "BaseBdev4", 00:15:25.817 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:25.817 "is_configured": true, 00:15:25.817 "data_offset": 2048, 00:15:25.817 "data_size": 63488 00:15:25.817 } 00:15:25.817 ] 00:15:25.817 }' 00:15:25.817 19:02:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.817 19:02:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.817 [2024-11-26 19:02:17.086604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.817 [2024-11-26 19:02:17.086692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.817 [2024-11-26 19:02:17.086720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:25.817 [2024-11-26 19:02:17.086737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.817 [2024-11-26 19:02:17.087389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.817 [2024-11-26 19:02:17.087428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.817 [2024-11-26 19:02:17.087529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:25.817 [2024-11-26 19:02:17.087594] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:25.817 [2024-11-26 19:02:17.087607] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:25.817 [2024-11-26 19:02:17.087623] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:25.817 BaseBdev1 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.817 19:02:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.749 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.005 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.005 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.005 "name": "raid_bdev1", 00:15:27.005 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:27.005 "strip_size_kb": 0, 00:15:27.005 "state": "online", 00:15:27.005 "raid_level": "raid1", 00:15:27.005 "superblock": true, 00:15:27.005 "num_base_bdevs": 4, 00:15:27.005 "num_base_bdevs_discovered": 2, 00:15:27.005 "num_base_bdevs_operational": 2, 00:15:27.005 "base_bdevs_list": [ 00:15:27.005 { 00:15:27.005 "name": null, 00:15:27.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.005 "is_configured": false, 00:15:27.005 
"data_offset": 0, 00:15:27.005 "data_size": 63488 00:15:27.005 }, 00:15:27.005 { 00:15:27.005 "name": null, 00:15:27.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.005 "is_configured": false, 00:15:27.005 "data_offset": 2048, 00:15:27.005 "data_size": 63488 00:15:27.005 }, 00:15:27.005 { 00:15:27.005 "name": "BaseBdev3", 00:15:27.005 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:27.005 "is_configured": true, 00:15:27.006 "data_offset": 2048, 00:15:27.006 "data_size": 63488 00:15:27.006 }, 00:15:27.006 { 00:15:27.006 "name": "BaseBdev4", 00:15:27.006 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:27.006 "is_configured": true, 00:15:27.006 "data_offset": 2048, 00:15:27.006 "data_size": 63488 00:15:27.006 } 00:15:27.006 ] 00:15:27.006 }' 00:15:27.006 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.006 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:27.261 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.572 "name": "raid_bdev1", 00:15:27.572 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:27.572 "strip_size_kb": 0, 00:15:27.572 "state": "online", 00:15:27.572 "raid_level": "raid1", 00:15:27.572 "superblock": true, 00:15:27.572 "num_base_bdevs": 4, 00:15:27.572 "num_base_bdevs_discovered": 2, 00:15:27.572 "num_base_bdevs_operational": 2, 00:15:27.572 "base_bdevs_list": [ 00:15:27.572 { 00:15:27.572 "name": null, 00:15:27.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.572 "is_configured": false, 00:15:27.572 "data_offset": 0, 00:15:27.572 "data_size": 63488 00:15:27.572 }, 00:15:27.572 { 00:15:27.572 "name": null, 00:15:27.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.572 "is_configured": false, 00:15:27.572 "data_offset": 2048, 00:15:27.572 "data_size": 63488 00:15:27.572 }, 00:15:27.572 { 00:15:27.572 "name": "BaseBdev3", 00:15:27.572 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:27.572 "is_configured": true, 00:15:27.572 "data_offset": 2048, 00:15:27.572 "data_size": 63488 00:15:27.572 }, 00:15:27.572 { 00:15:27.572 "name": "BaseBdev4", 00:15:27.572 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:27.572 "is_configured": true, 00:15:27.572 "data_offset": 2048, 00:15:27.572 "data_size": 63488 00:15:27.572 } 00:15:27.572 ] 00:15:27.572 }' 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.572 
19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.572 [2024-11-26 19:02:18.767483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.572 [2024-11-26 19:02:18.767749] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:27.572 [2024-11-26 19:02:18.767771] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.572 request: 00:15:27.572 { 00:15:27.572 "base_bdev": "BaseBdev1", 00:15:27.572 "raid_bdev": "raid_bdev1", 00:15:27.572 "method": "bdev_raid_add_base_bdev", 00:15:27.572 "req_id": 1 00:15:27.572 } 00:15:27.572 Got JSON-RPC error response 00:15:27.572 response: 00:15:27.572 { 00:15:27.572 "code": -22, 00:15:27.572 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:27.572 } 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.572 19:02:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.521 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.522 19:02:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.522 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.522 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.522 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.522 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.522 "name": "raid_bdev1", 00:15:28.522 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:28.522 "strip_size_kb": 0, 00:15:28.522 "state": "online", 00:15:28.522 "raid_level": "raid1", 00:15:28.522 "superblock": true, 00:15:28.522 "num_base_bdevs": 4, 00:15:28.522 "num_base_bdevs_discovered": 2, 00:15:28.522 "num_base_bdevs_operational": 2, 00:15:28.522 "base_bdevs_list": [ 00:15:28.522 { 00:15:28.522 "name": null, 00:15:28.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.522 "is_configured": false, 00:15:28.522 "data_offset": 0, 00:15:28.522 "data_size": 63488 00:15:28.522 }, 00:15:28.522 { 00:15:28.522 "name": null, 00:15:28.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.522 "is_configured": false, 00:15:28.522 "data_offset": 2048, 00:15:28.522 "data_size": 63488 00:15:28.522 }, 00:15:28.522 { 00:15:28.522 "name": "BaseBdev3", 00:15:28.522 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:28.522 "is_configured": true, 00:15:28.522 "data_offset": 2048, 00:15:28.522 "data_size": 63488 00:15:28.522 }, 00:15:28.522 { 00:15:28.522 "name": "BaseBdev4", 00:15:28.522 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:28.522 "is_configured": true, 00:15:28.522 "data_offset": 2048, 00:15:28.522 "data_size": 63488 00:15:28.522 } 00:15:28.522 ] 00:15:28.522 }' 00:15:28.522 19:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.522 19:02:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.087 "name": "raid_bdev1", 00:15:29.087 "uuid": "863532e9-42f8-48b4-89e7-40ba5c1f8940", 00:15:29.087 "strip_size_kb": 0, 00:15:29.087 "state": "online", 00:15:29.087 "raid_level": "raid1", 00:15:29.087 "superblock": true, 00:15:29.087 "num_base_bdevs": 4, 00:15:29.087 "num_base_bdevs_discovered": 2, 00:15:29.087 "num_base_bdevs_operational": 2, 00:15:29.087 "base_bdevs_list": [ 00:15:29.087 { 00:15:29.087 "name": null, 00:15:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.087 "is_configured": false, 00:15:29.087 "data_offset": 0, 00:15:29.087 "data_size": 63488 00:15:29.087 }, 00:15:29.087 { 00:15:29.087 "name": null, 00:15:29.087 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:29.087 "is_configured": false, 00:15:29.087 "data_offset": 2048, 00:15:29.087 "data_size": 63488 00:15:29.087 }, 00:15:29.087 { 00:15:29.087 "name": "BaseBdev3", 00:15:29.087 "uuid": "8fe82444-ea7e-5b43-96d5-8ed36f963d4e", 00:15:29.087 "is_configured": true, 00:15:29.087 "data_offset": 2048, 00:15:29.087 "data_size": 63488 00:15:29.087 }, 00:15:29.087 { 00:15:29.087 "name": "BaseBdev4", 00:15:29.087 "uuid": "ea907599-02ac-5c1c-8194-65cc0d549355", 00:15:29.087 "is_configured": true, 00:15:29.087 "data_offset": 2048, 00:15:29.087 "data_size": 63488 00:15:29.087 } 00:15:29.087 ] 00:15:29.087 }' 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79539 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79539 ']' 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79539 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.087 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79539 00:15:29.345 killing process with pid 79539 00:15:29.345 Received shutdown signal, test time was about 19.474894 seconds 00:15:29.345 00:15:29.345 Latency(us) 00:15:29.345 [2024-11-26T19:02:20.712Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:29.345 [2024-11-26T19:02:20.712Z] =================================================================================================================== 00:15:29.345 [2024-11-26T19:02:20.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.345 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.345 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.346 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79539' 00:15:29.346 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79539 00:15:29.346 [2024-11-26 19:02:20.470806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.346 19:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79539 00:15:29.346 [2024-11-26 19:02:20.471021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.346 [2024-11-26 19:02:20.471137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.346 [2024-11-26 19:02:20.471155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:29.604 [2024-11-26 19:02:20.847101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.980 ************************************ 00:15:30.980 END TEST raid_rebuild_test_sb_io 00:15:30.980 ************************************ 00:15:30.980 19:02:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:30.980 00:15:30.980 real 0m23.165s 00:15:30.980 user 0m31.486s 00:15:30.980 sys 0m2.448s 00:15:30.980 19:02:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.980 19:02:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.980 19:02:21 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:30.980 19:02:21 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:30.980 19:02:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:30.980 19:02:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.980 19:02:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.980 ************************************ 00:15:30.980 START TEST raid5f_state_function_test 00:15:30.980 ************************************ 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.980 19:02:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80278 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:30.980 19:02:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80278' 00:15:30.980 Process raid pid: 80278 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80278 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80278 ']' 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.980 19:02:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.980 [2024-11-26 19:02:22.127136] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:15:30.980 [2024-11-26 19:02:22.127588] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.980 [2024-11-26 19:02:22.312403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.239 [2024-11-26 19:02:22.451117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.509 [2024-11-26 19:02:22.656072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.509 [2024-11-26 19:02:22.656344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.772 [2024-11-26 19:02:23.083378] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.772 [2024-11-26 19:02:23.083448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.772 [2024-11-26 19:02:23.083467] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.772 [2024-11-26 19:02:23.083485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.772 [2024-11-26 19:02:23.083495] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:31.772 [2024-11-26 19:02:23.083510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.772 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:32.030 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.030 "name": "Existed_Raid", 00:15:32.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.030 "strip_size_kb": 64, 00:15:32.030 "state": "configuring", 00:15:32.030 "raid_level": "raid5f", 00:15:32.030 "superblock": false, 00:15:32.030 "num_base_bdevs": 3, 00:15:32.030 "num_base_bdevs_discovered": 0, 00:15:32.030 "num_base_bdevs_operational": 3, 00:15:32.030 "base_bdevs_list": [ 00:15:32.030 { 00:15:32.030 "name": "BaseBdev1", 00:15:32.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.030 "is_configured": false, 00:15:32.030 "data_offset": 0, 00:15:32.030 "data_size": 0 00:15:32.030 }, 00:15:32.030 { 00:15:32.030 "name": "BaseBdev2", 00:15:32.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.030 "is_configured": false, 00:15:32.030 "data_offset": 0, 00:15:32.030 "data_size": 0 00:15:32.030 }, 00:15:32.030 { 00:15:32.030 "name": "BaseBdev3", 00:15:32.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.030 "is_configured": false, 00:15:32.030 "data_offset": 0, 00:15:32.030 "data_size": 0 00:15:32.030 } 00:15:32.030 ] 00:15:32.030 }' 00:15:32.030 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.030 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.287 [2024-11-26 19:02:23.595559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.287 [2024-11-26 19:02:23.595602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.287 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.287 [2024-11-26 19:02:23.603513] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.287 [2024-11-26 19:02:23.603572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.287 [2024-11-26 19:02:23.603590] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.287 [2024-11-26 19:02:23.603606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.287 [2024-11-26 19:02:23.603616] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.288 [2024-11-26 19:02:23.603631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.288 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.288 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.288 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.288 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.288 [2024-11-26 19:02:23.650291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.288 BaseBdev1 00:15:32.546 19:02:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.546 [ 00:15:32.546 { 00:15:32.546 "name": "BaseBdev1", 00:15:32.546 "aliases": [ 00:15:32.546 "3380585f-ba19-44c0-8e0c-136124f7f8bf" 00:15:32.546 ], 00:15:32.546 "product_name": "Malloc disk", 00:15:32.546 "block_size": 512, 00:15:32.546 "num_blocks": 65536, 00:15:32.546 "uuid": "3380585f-ba19-44c0-8e0c-136124f7f8bf", 00:15:32.546 "assigned_rate_limits": { 00:15:32.546 "rw_ios_per_sec": 0, 00:15:32.546 
"rw_mbytes_per_sec": 0, 00:15:32.546 "r_mbytes_per_sec": 0, 00:15:32.546 "w_mbytes_per_sec": 0 00:15:32.546 }, 00:15:32.546 "claimed": true, 00:15:32.546 "claim_type": "exclusive_write", 00:15:32.546 "zoned": false, 00:15:32.546 "supported_io_types": { 00:15:32.546 "read": true, 00:15:32.546 "write": true, 00:15:32.546 "unmap": true, 00:15:32.546 "flush": true, 00:15:32.546 "reset": true, 00:15:32.546 "nvme_admin": false, 00:15:32.546 "nvme_io": false, 00:15:32.546 "nvme_io_md": false, 00:15:32.546 "write_zeroes": true, 00:15:32.546 "zcopy": true, 00:15:32.546 "get_zone_info": false, 00:15:32.546 "zone_management": false, 00:15:32.546 "zone_append": false, 00:15:32.546 "compare": false, 00:15:32.546 "compare_and_write": false, 00:15:32.546 "abort": true, 00:15:32.546 "seek_hole": false, 00:15:32.546 "seek_data": false, 00:15:32.546 "copy": true, 00:15:32.546 "nvme_iov_md": false 00:15:32.546 }, 00:15:32.546 "memory_domains": [ 00:15:32.546 { 00:15:32.546 "dma_device_id": "system", 00:15:32.546 "dma_device_type": 1 00:15:32.546 }, 00:15:32.546 { 00:15:32.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.546 "dma_device_type": 2 00:15:32.546 } 00:15:32.546 ], 00:15:32.546 "driver_specific": {} 00:15:32.546 } 00:15:32.546 ] 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.546 19:02:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.546 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.547 "name": "Existed_Raid", 00:15:32.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.547 "strip_size_kb": 64, 00:15:32.547 "state": "configuring", 00:15:32.547 "raid_level": "raid5f", 00:15:32.547 "superblock": false, 00:15:32.547 "num_base_bdevs": 3, 00:15:32.547 "num_base_bdevs_discovered": 1, 00:15:32.547 "num_base_bdevs_operational": 3, 00:15:32.547 "base_bdevs_list": [ 00:15:32.547 { 00:15:32.547 "name": "BaseBdev1", 00:15:32.547 "uuid": "3380585f-ba19-44c0-8e0c-136124f7f8bf", 00:15:32.547 "is_configured": true, 00:15:32.547 "data_offset": 0, 00:15:32.547 "data_size": 65536 00:15:32.547 }, 00:15:32.547 { 00:15:32.547 "name": 
"BaseBdev2", 00:15:32.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.547 "is_configured": false, 00:15:32.547 "data_offset": 0, 00:15:32.547 "data_size": 0 00:15:32.547 }, 00:15:32.547 { 00:15:32.547 "name": "BaseBdev3", 00:15:32.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.547 "is_configured": false, 00:15:32.547 "data_offset": 0, 00:15:32.547 "data_size": 0 00:15:32.547 } 00:15:32.547 ] 00:15:32.547 }' 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.547 19:02:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.117 [2024-11-26 19:02:24.194516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.117 [2024-11-26 19:02:24.194578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.117 [2024-11-26 19:02:24.202570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.117 [2024-11-26 19:02:24.205160] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:33.117 [2024-11-26 19:02:24.205217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.117 [2024-11-26 19:02:24.205236] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.117 [2024-11-26 19:02:24.205253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.117 "name": "Existed_Raid", 00:15:33.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.117 "strip_size_kb": 64, 00:15:33.117 "state": "configuring", 00:15:33.117 "raid_level": "raid5f", 00:15:33.117 "superblock": false, 00:15:33.117 "num_base_bdevs": 3, 00:15:33.117 "num_base_bdevs_discovered": 1, 00:15:33.117 "num_base_bdevs_operational": 3, 00:15:33.117 "base_bdevs_list": [ 00:15:33.117 { 00:15:33.117 "name": "BaseBdev1", 00:15:33.117 "uuid": "3380585f-ba19-44c0-8e0c-136124f7f8bf", 00:15:33.117 "is_configured": true, 00:15:33.117 "data_offset": 0, 00:15:33.117 "data_size": 65536 00:15:33.117 }, 00:15:33.117 { 00:15:33.117 "name": "BaseBdev2", 00:15:33.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.117 "is_configured": false, 00:15:33.117 "data_offset": 0, 00:15:33.117 "data_size": 0 00:15:33.117 }, 00:15:33.117 { 00:15:33.117 "name": "BaseBdev3", 00:15:33.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.117 "is_configured": false, 00:15:33.117 "data_offset": 0, 00:15:33.117 "data_size": 0 00:15:33.117 } 00:15:33.117 ] 00:15:33.117 }' 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.117 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.376 19:02:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.376 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.376 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.635 [2024-11-26 19:02:24.762671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.635 BaseBdev2 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.635 [ 00:15:33.635 { 00:15:33.635 "name": "BaseBdev2", 00:15:33.635 "aliases": [ 00:15:33.635 "2adb8941-57c0-4920-b838-bf3680168707" 00:15:33.635 ], 00:15:33.635 "product_name": "Malloc disk", 00:15:33.635 "block_size": 512, 00:15:33.635 "num_blocks": 65536, 00:15:33.635 "uuid": "2adb8941-57c0-4920-b838-bf3680168707", 00:15:33.635 "assigned_rate_limits": { 00:15:33.635 "rw_ios_per_sec": 0, 00:15:33.635 "rw_mbytes_per_sec": 0, 00:15:33.635 "r_mbytes_per_sec": 0, 00:15:33.635 "w_mbytes_per_sec": 0 00:15:33.635 }, 00:15:33.635 "claimed": true, 00:15:33.635 "claim_type": "exclusive_write", 00:15:33.635 "zoned": false, 00:15:33.635 "supported_io_types": { 00:15:33.635 "read": true, 00:15:33.635 "write": true, 00:15:33.635 "unmap": true, 00:15:33.635 "flush": true, 00:15:33.635 "reset": true, 00:15:33.635 "nvme_admin": false, 00:15:33.635 "nvme_io": false, 00:15:33.635 "nvme_io_md": false, 00:15:33.635 "write_zeroes": true, 00:15:33.635 "zcopy": true, 00:15:33.635 "get_zone_info": false, 00:15:33.635 "zone_management": false, 00:15:33.635 "zone_append": false, 00:15:33.635 "compare": false, 00:15:33.635 "compare_and_write": false, 00:15:33.635 "abort": true, 00:15:33.635 "seek_hole": false, 00:15:33.635 "seek_data": false, 00:15:33.635 "copy": true, 00:15:33.635 "nvme_iov_md": false 00:15:33.635 }, 00:15:33.635 "memory_domains": [ 00:15:33.635 { 00:15:33.635 "dma_device_id": "system", 00:15:33.635 "dma_device_type": 1 00:15:33.635 }, 00:15:33.635 { 00:15:33.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.635 "dma_device_type": 2 00:15:33.635 } 00:15:33.635 ], 00:15:33.635 "driver_specific": {} 00:15:33.635 } 00:15:33.635 ] 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.635 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:33.635 "name": "Existed_Raid", 00:15:33.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.635 "strip_size_kb": 64, 00:15:33.636 "state": "configuring", 00:15:33.636 "raid_level": "raid5f", 00:15:33.636 "superblock": false, 00:15:33.636 "num_base_bdevs": 3, 00:15:33.636 "num_base_bdevs_discovered": 2, 00:15:33.636 "num_base_bdevs_operational": 3, 00:15:33.636 "base_bdevs_list": [ 00:15:33.636 { 00:15:33.636 "name": "BaseBdev1", 00:15:33.636 "uuid": "3380585f-ba19-44c0-8e0c-136124f7f8bf", 00:15:33.636 "is_configured": true, 00:15:33.636 "data_offset": 0, 00:15:33.636 "data_size": 65536 00:15:33.636 }, 00:15:33.636 { 00:15:33.636 "name": "BaseBdev2", 00:15:33.636 "uuid": "2adb8941-57c0-4920-b838-bf3680168707", 00:15:33.636 "is_configured": true, 00:15:33.636 "data_offset": 0, 00:15:33.636 "data_size": 65536 00:15:33.636 }, 00:15:33.636 { 00:15:33.636 "name": "BaseBdev3", 00:15:33.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.636 "is_configured": false, 00:15:33.636 "data_offset": 0, 00:15:33.636 "data_size": 0 00:15:33.636 } 00:15:33.636 ] 00:15:33.636 }' 00:15:33.636 19:02:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.636 19:02:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.235 [2024-11-26 19:02:25.373159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.235 [2024-11-26 19:02:25.373226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:34.235 [2024-11-26 19:02:25.373249] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:34.235 [2024-11-26 19:02:25.373591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:34.235 [2024-11-26 19:02:25.378650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:34.235 [2024-11-26 19:02:25.378691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:34.235 [2024-11-26 19:02:25.379083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.235 BaseBdev3 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.235 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.235 [ 00:15:34.235 { 00:15:34.236 "name": "BaseBdev3", 00:15:34.236 "aliases": [ 00:15:34.236 "43c75fdb-04b9-4930-a28d-04bec7d75cb0" 00:15:34.236 ], 00:15:34.236 "product_name": "Malloc disk", 00:15:34.236 "block_size": 512, 00:15:34.236 "num_blocks": 65536, 00:15:34.236 "uuid": "43c75fdb-04b9-4930-a28d-04bec7d75cb0", 00:15:34.236 "assigned_rate_limits": { 00:15:34.236 "rw_ios_per_sec": 0, 00:15:34.236 "rw_mbytes_per_sec": 0, 00:15:34.236 "r_mbytes_per_sec": 0, 00:15:34.236 "w_mbytes_per_sec": 0 00:15:34.236 }, 00:15:34.236 "claimed": true, 00:15:34.236 "claim_type": "exclusive_write", 00:15:34.236 "zoned": false, 00:15:34.236 "supported_io_types": { 00:15:34.236 "read": true, 00:15:34.236 "write": true, 00:15:34.236 "unmap": true, 00:15:34.236 "flush": true, 00:15:34.236 "reset": true, 00:15:34.236 "nvme_admin": false, 00:15:34.236 "nvme_io": false, 00:15:34.236 "nvme_io_md": false, 00:15:34.236 "write_zeroes": true, 00:15:34.236 "zcopy": true, 00:15:34.236 "get_zone_info": false, 00:15:34.236 "zone_management": false, 00:15:34.236 "zone_append": false, 00:15:34.236 "compare": false, 00:15:34.236 "compare_and_write": false, 00:15:34.236 "abort": true, 00:15:34.236 "seek_hole": false, 00:15:34.236 "seek_data": false, 00:15:34.236 "copy": true, 00:15:34.236 "nvme_iov_md": false 00:15:34.236 }, 00:15:34.236 "memory_domains": [ 00:15:34.236 { 00:15:34.236 "dma_device_id": "system", 00:15:34.236 "dma_device_type": 1 00:15:34.236 }, 00:15:34.236 { 00:15:34.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.236 "dma_device_type": 2 00:15:34.236 } 00:15:34.236 ], 00:15:34.236 "driver_specific": {} 00:15:34.236 } 00:15:34.236 ] 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.236 19:02:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.236 "name": "Existed_Raid", 00:15:34.236 "uuid": "6aa6d908-3c1a-46f9-90a3-7c1f5bb2a9fc", 00:15:34.236 "strip_size_kb": 64, 00:15:34.236 "state": "online", 00:15:34.236 "raid_level": "raid5f", 00:15:34.236 "superblock": false, 00:15:34.236 "num_base_bdevs": 3, 00:15:34.236 "num_base_bdevs_discovered": 3, 00:15:34.236 "num_base_bdevs_operational": 3, 00:15:34.236 "base_bdevs_list": [ 00:15:34.236 { 00:15:34.236 "name": "BaseBdev1", 00:15:34.236 "uuid": "3380585f-ba19-44c0-8e0c-136124f7f8bf", 00:15:34.236 "is_configured": true, 00:15:34.236 "data_offset": 0, 00:15:34.236 "data_size": 65536 00:15:34.236 }, 00:15:34.236 { 00:15:34.236 "name": "BaseBdev2", 00:15:34.236 "uuid": "2adb8941-57c0-4920-b838-bf3680168707", 00:15:34.236 "is_configured": true, 00:15:34.236 "data_offset": 0, 00:15:34.236 "data_size": 65536 00:15:34.236 }, 00:15:34.236 { 00:15:34.236 "name": "BaseBdev3", 00:15:34.236 "uuid": "43c75fdb-04b9-4930-a28d-04bec7d75cb0", 00:15:34.236 "is_configured": true, 00:15:34.236 "data_offset": 0, 00:15:34.236 "data_size": 65536 00:15:34.236 } 00:15:34.236 ] 00:15:34.236 }' 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.236 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.814 19:02:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.814 [2024-11-26 19:02:25.937092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.814 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.814 "name": "Existed_Raid", 00:15:34.814 "aliases": [ 00:15:34.814 "6aa6d908-3c1a-46f9-90a3-7c1f5bb2a9fc" 00:15:34.814 ], 00:15:34.814 "product_name": "Raid Volume", 00:15:34.814 "block_size": 512, 00:15:34.814 "num_blocks": 131072, 00:15:34.814 "uuid": "6aa6d908-3c1a-46f9-90a3-7c1f5bb2a9fc", 00:15:34.814 "assigned_rate_limits": { 00:15:34.814 "rw_ios_per_sec": 0, 00:15:34.814 "rw_mbytes_per_sec": 0, 00:15:34.814 "r_mbytes_per_sec": 0, 00:15:34.814 "w_mbytes_per_sec": 0 00:15:34.814 }, 00:15:34.814 "claimed": false, 00:15:34.814 "zoned": false, 00:15:34.814 "supported_io_types": { 00:15:34.814 "read": true, 00:15:34.814 "write": true, 00:15:34.814 "unmap": false, 00:15:34.814 "flush": false, 00:15:34.814 "reset": true, 00:15:34.814 "nvme_admin": false, 00:15:34.814 "nvme_io": false, 00:15:34.814 "nvme_io_md": false, 00:15:34.814 "write_zeroes": true, 00:15:34.814 "zcopy": false, 00:15:34.814 "get_zone_info": false, 00:15:34.814 "zone_management": false, 00:15:34.814 "zone_append": false, 
00:15:34.814 "compare": false, 00:15:34.814 "compare_and_write": false, 00:15:34.814 "abort": false, 00:15:34.814 "seek_hole": false, 00:15:34.814 "seek_data": false, 00:15:34.814 "copy": false, 00:15:34.814 "nvme_iov_md": false 00:15:34.814 }, 00:15:34.814 "driver_specific": { 00:15:34.814 "raid": { 00:15:34.814 "uuid": "6aa6d908-3c1a-46f9-90a3-7c1f5bb2a9fc", 00:15:34.814 "strip_size_kb": 64, 00:15:34.814 "state": "online", 00:15:34.814 "raid_level": "raid5f", 00:15:34.814 "superblock": false, 00:15:34.814 "num_base_bdevs": 3, 00:15:34.814 "num_base_bdevs_discovered": 3, 00:15:34.814 "num_base_bdevs_operational": 3, 00:15:34.814 "base_bdevs_list": [ 00:15:34.814 { 00:15:34.814 "name": "BaseBdev1", 00:15:34.814 "uuid": "3380585f-ba19-44c0-8e0c-136124f7f8bf", 00:15:34.814 "is_configured": true, 00:15:34.814 "data_offset": 0, 00:15:34.814 "data_size": 65536 00:15:34.814 }, 00:15:34.814 { 00:15:34.814 "name": "BaseBdev2", 00:15:34.814 "uuid": "2adb8941-57c0-4920-b838-bf3680168707", 00:15:34.814 "is_configured": true, 00:15:34.814 "data_offset": 0, 00:15:34.814 "data_size": 65536 00:15:34.814 }, 00:15:34.814 { 00:15:34.814 "name": "BaseBdev3", 00:15:34.814 "uuid": "43c75fdb-04b9-4930-a28d-04bec7d75cb0", 00:15:34.814 "is_configured": true, 00:15:34.814 "data_offset": 0, 00:15:34.814 "data_size": 65536 00:15:34.815 } 00:15:34.815 ] 00:15:34.815 } 00:15:34.815 } 00:15:34.815 }' 00:15:34.815 19:02:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:34.815 BaseBdev2 00:15:34.815 BaseBdev3' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.815 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.073 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.073 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.073 19:02:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.073 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:35.073 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.073 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.074 [2024-11-26 19:02:26.268922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:35.074 
19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.074 "name": "Existed_Raid", 00:15:35.074 "uuid": "6aa6d908-3c1a-46f9-90a3-7c1f5bb2a9fc", 00:15:35.074 "strip_size_kb": 64, 00:15:35.074 "state": 
"online", 00:15:35.074 "raid_level": "raid5f", 00:15:35.074 "superblock": false, 00:15:35.074 "num_base_bdevs": 3, 00:15:35.074 "num_base_bdevs_discovered": 2, 00:15:35.074 "num_base_bdevs_operational": 2, 00:15:35.074 "base_bdevs_list": [ 00:15:35.074 { 00:15:35.074 "name": null, 00:15:35.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.074 "is_configured": false, 00:15:35.074 "data_offset": 0, 00:15:35.074 "data_size": 65536 00:15:35.074 }, 00:15:35.074 { 00:15:35.074 "name": "BaseBdev2", 00:15:35.074 "uuid": "2adb8941-57c0-4920-b838-bf3680168707", 00:15:35.074 "is_configured": true, 00:15:35.074 "data_offset": 0, 00:15:35.074 "data_size": 65536 00:15:35.074 }, 00:15:35.074 { 00:15:35.074 "name": "BaseBdev3", 00:15:35.074 "uuid": "43c75fdb-04b9-4930-a28d-04bec7d75cb0", 00:15:35.074 "is_configured": true, 00:15:35.074 "data_offset": 0, 00:15:35.074 "data_size": 65536 00:15:35.074 } 00:15:35.074 ] 00:15:35.074 }' 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.074 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.640 19:02:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.640 [2024-11-26 19:02:26.937863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.640 [2024-11-26 19:02:26.938041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.898 [2024-11-26 19:02:27.015863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.898 [2024-11-26 19:02:27.079916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.898 [2024-11-26 19:02:27.079998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.898 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.157 BaseBdev2 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.157 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:36.158 [ 00:15:36.158 { 00:15:36.158 "name": "BaseBdev2", 00:15:36.158 "aliases": [ 00:15:36.158 "99857352-6314-443f-bddc-28c35f72c158" 00:15:36.158 ], 00:15:36.158 "product_name": "Malloc disk", 00:15:36.158 "block_size": 512, 00:15:36.158 "num_blocks": 65536, 00:15:36.158 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:36.158 "assigned_rate_limits": { 00:15:36.158 "rw_ios_per_sec": 0, 00:15:36.158 "rw_mbytes_per_sec": 0, 00:15:36.158 "r_mbytes_per_sec": 0, 00:15:36.158 "w_mbytes_per_sec": 0 00:15:36.158 }, 00:15:36.158 "claimed": false, 00:15:36.158 "zoned": false, 00:15:36.158 "supported_io_types": { 00:15:36.158 "read": true, 00:15:36.158 "write": true, 00:15:36.158 "unmap": true, 00:15:36.158 "flush": true, 00:15:36.158 "reset": true, 00:15:36.158 "nvme_admin": false, 00:15:36.158 "nvme_io": false, 00:15:36.158 "nvme_io_md": false, 00:15:36.158 "write_zeroes": true, 00:15:36.158 "zcopy": true, 00:15:36.158 "get_zone_info": false, 00:15:36.158 "zone_management": false, 00:15:36.158 "zone_append": false, 00:15:36.158 "compare": false, 00:15:36.158 "compare_and_write": false, 00:15:36.158 "abort": true, 00:15:36.158 "seek_hole": false, 00:15:36.158 "seek_data": false, 00:15:36.158 "copy": true, 00:15:36.158 "nvme_iov_md": false 00:15:36.158 }, 00:15:36.158 "memory_domains": [ 00:15:36.158 { 00:15:36.158 "dma_device_id": "system", 00:15:36.158 "dma_device_type": 1 00:15:36.158 }, 00:15:36.158 { 00:15:36.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.158 "dma_device_type": 2 00:15:36.158 } 00:15:36.158 ], 00:15:36.158 "driver_specific": {} 00:15:36.158 } 00:15:36.158 ] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.158 BaseBdev3 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.158 [ 00:15:36.158 { 00:15:36.158 "name": "BaseBdev3", 00:15:36.158 "aliases": [ 00:15:36.158 "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3" 00:15:36.158 ], 00:15:36.158 "product_name": "Malloc disk", 00:15:36.158 "block_size": 512, 00:15:36.158 "num_blocks": 65536, 00:15:36.158 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:36.158 "assigned_rate_limits": { 00:15:36.158 "rw_ios_per_sec": 0, 00:15:36.158 "rw_mbytes_per_sec": 0, 00:15:36.158 "r_mbytes_per_sec": 0, 00:15:36.158 "w_mbytes_per_sec": 0 00:15:36.158 }, 00:15:36.158 "claimed": false, 00:15:36.158 "zoned": false, 00:15:36.158 "supported_io_types": { 00:15:36.158 "read": true, 00:15:36.158 "write": true, 00:15:36.158 "unmap": true, 00:15:36.158 "flush": true, 00:15:36.158 "reset": true, 00:15:36.158 "nvme_admin": false, 00:15:36.158 "nvme_io": false, 00:15:36.158 "nvme_io_md": false, 00:15:36.158 "write_zeroes": true, 00:15:36.158 "zcopy": true, 00:15:36.158 "get_zone_info": false, 00:15:36.158 "zone_management": false, 00:15:36.158 "zone_append": false, 00:15:36.158 "compare": false, 00:15:36.158 "compare_and_write": false, 00:15:36.158 "abort": true, 00:15:36.158 "seek_hole": false, 00:15:36.158 "seek_data": false, 00:15:36.158 "copy": true, 00:15:36.158 "nvme_iov_md": false 00:15:36.158 }, 00:15:36.158 "memory_domains": [ 00:15:36.158 { 00:15:36.158 "dma_device_id": "system", 00:15:36.158 "dma_device_type": 1 00:15:36.158 }, 00:15:36.158 { 00:15:36.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.158 "dma_device_type": 2 00:15:36.158 } 00:15:36.158 ], 00:15:36.158 "driver_specific": {} 00:15:36.158 } 00:15:36.158 ] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:36.158 19:02:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.158 [2024-11-26 19:02:27.379921] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.158 [2024-11-26 19:02:27.379977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.158 [2024-11-26 19:02:27.380011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.158 [2024-11-26 19:02:27.382639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.158 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.159 19:02:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.159 "name": "Existed_Raid", 00:15:36.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.159 "strip_size_kb": 64, 00:15:36.159 "state": "configuring", 00:15:36.159 "raid_level": "raid5f", 00:15:36.159 "superblock": false, 00:15:36.159 "num_base_bdevs": 3, 00:15:36.159 "num_base_bdevs_discovered": 2, 00:15:36.159 "num_base_bdevs_operational": 3, 00:15:36.159 "base_bdevs_list": [ 00:15:36.159 { 00:15:36.159 "name": "BaseBdev1", 00:15:36.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.159 "is_configured": false, 00:15:36.159 "data_offset": 0, 00:15:36.159 "data_size": 0 00:15:36.159 }, 00:15:36.159 { 00:15:36.159 "name": "BaseBdev2", 00:15:36.159 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:36.159 "is_configured": true, 00:15:36.159 "data_offset": 0, 00:15:36.159 "data_size": 65536 00:15:36.159 }, 00:15:36.159 { 00:15:36.159 "name": "BaseBdev3", 00:15:36.159 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:36.159 "is_configured": true, 
00:15:36.159 "data_offset": 0, 00:15:36.159 "data_size": 65536 00:15:36.159 } 00:15:36.159 ] 00:15:36.159 }' 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.159 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.726 [2024-11-26 19:02:27.912077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.726 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.727 19:02:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.727 "name": "Existed_Raid", 00:15:36.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.727 "strip_size_kb": 64, 00:15:36.727 "state": "configuring", 00:15:36.727 "raid_level": "raid5f", 00:15:36.727 "superblock": false, 00:15:36.727 "num_base_bdevs": 3, 00:15:36.727 "num_base_bdevs_discovered": 1, 00:15:36.727 "num_base_bdevs_operational": 3, 00:15:36.727 "base_bdevs_list": [ 00:15:36.727 { 00:15:36.727 "name": "BaseBdev1", 00:15:36.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.727 "is_configured": false, 00:15:36.727 "data_offset": 0, 00:15:36.727 "data_size": 0 00:15:36.727 }, 00:15:36.727 { 00:15:36.727 "name": null, 00:15:36.727 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:36.727 "is_configured": false, 00:15:36.727 "data_offset": 0, 00:15:36.727 "data_size": 65536 00:15:36.727 }, 00:15:36.727 { 00:15:36.727 "name": "BaseBdev3", 00:15:36.727 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:36.727 "is_configured": true, 00:15:36.727 "data_offset": 0, 00:15:36.727 "data_size": 65536 00:15:36.727 } 00:15:36.727 ] 00:15:36.727 }' 00:15:36.727 19:02:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.727 19:02:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 [2024-11-26 19:02:28.499579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.294 BaseBdev1 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.294 19:02:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 [ 00:15:37.294 { 00:15:37.294 "name": "BaseBdev1", 00:15:37.294 "aliases": [ 00:15:37.294 "64e8878d-0901-4050-bc95-e9c7c236d7b6" 00:15:37.294 ], 00:15:37.294 "product_name": "Malloc disk", 00:15:37.294 "block_size": 512, 00:15:37.294 "num_blocks": 65536, 00:15:37.294 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:37.294 "assigned_rate_limits": { 00:15:37.294 "rw_ios_per_sec": 0, 00:15:37.294 "rw_mbytes_per_sec": 0, 00:15:37.294 "r_mbytes_per_sec": 0, 00:15:37.294 "w_mbytes_per_sec": 0 00:15:37.294 }, 00:15:37.294 "claimed": true, 00:15:37.294 "claim_type": "exclusive_write", 00:15:37.294 "zoned": false, 00:15:37.294 "supported_io_types": { 00:15:37.294 "read": true, 00:15:37.294 "write": true, 00:15:37.294 "unmap": true, 00:15:37.294 "flush": true, 00:15:37.294 "reset": true, 00:15:37.294 "nvme_admin": false, 00:15:37.294 "nvme_io": false, 00:15:37.294 "nvme_io_md": false, 00:15:37.294 "write_zeroes": true, 00:15:37.294 "zcopy": true, 00:15:37.294 "get_zone_info": false, 00:15:37.294 "zone_management": false, 00:15:37.294 "zone_append": false, 00:15:37.294 
"compare": false, 00:15:37.294 "compare_and_write": false, 00:15:37.294 "abort": true, 00:15:37.294 "seek_hole": false, 00:15:37.294 "seek_data": false, 00:15:37.294 "copy": true, 00:15:37.294 "nvme_iov_md": false 00:15:37.294 }, 00:15:37.294 "memory_domains": [ 00:15:37.294 { 00:15:37.294 "dma_device_id": "system", 00:15:37.294 "dma_device_type": 1 00:15:37.294 }, 00:15:37.294 { 00:15:37.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.294 "dma_device_type": 2 00:15:37.294 } 00:15:37.294 ], 00:15:37.294 "driver_specific": {} 00:15:37.294 } 00:15:37.294 ] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.294 19:02:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.294 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.294 "name": "Existed_Raid", 00:15:37.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.294 "strip_size_kb": 64, 00:15:37.294 "state": "configuring", 00:15:37.294 "raid_level": "raid5f", 00:15:37.294 "superblock": false, 00:15:37.294 "num_base_bdevs": 3, 00:15:37.294 "num_base_bdevs_discovered": 2, 00:15:37.294 "num_base_bdevs_operational": 3, 00:15:37.294 "base_bdevs_list": [ 00:15:37.294 { 00:15:37.294 "name": "BaseBdev1", 00:15:37.294 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:37.294 "is_configured": true, 00:15:37.294 "data_offset": 0, 00:15:37.294 "data_size": 65536 00:15:37.294 }, 00:15:37.294 { 00:15:37.294 "name": null, 00:15:37.294 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:37.294 "is_configured": false, 00:15:37.294 "data_offset": 0, 00:15:37.295 "data_size": 65536 00:15:37.295 }, 00:15:37.295 { 00:15:37.295 "name": "BaseBdev3", 00:15:37.295 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:37.295 "is_configured": true, 00:15:37.295 "data_offset": 0, 00:15:37.295 "data_size": 65536 00:15:37.295 } 00:15:37.295 ] 00:15:37.295 }' 00:15:37.295 19:02:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.295 19:02:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.861 19:02:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.862 [2024-11-26 19:02:29.059772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.862 19:02:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.862 "name": "Existed_Raid", 00:15:37.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.862 "strip_size_kb": 64, 00:15:37.862 "state": "configuring", 00:15:37.862 "raid_level": "raid5f", 00:15:37.862 "superblock": false, 00:15:37.862 "num_base_bdevs": 3, 00:15:37.862 "num_base_bdevs_discovered": 1, 00:15:37.862 "num_base_bdevs_operational": 3, 00:15:37.862 "base_bdevs_list": [ 00:15:37.862 { 00:15:37.862 "name": "BaseBdev1", 00:15:37.862 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:37.862 "is_configured": true, 00:15:37.862 "data_offset": 0, 00:15:37.862 "data_size": 65536 00:15:37.862 }, 00:15:37.862 { 00:15:37.862 "name": null, 00:15:37.862 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:37.862 "is_configured": false, 00:15:37.862 "data_offset": 0, 00:15:37.862 "data_size": 65536 00:15:37.862 }, 00:15:37.862 { 00:15:37.862 "name": null, 
00:15:37.862 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:37.862 "is_configured": false, 00:15:37.862 "data_offset": 0, 00:15:37.862 "data_size": 65536 00:15:37.862 } 00:15:37.862 ] 00:15:37.862 }' 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.862 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.429 [2024-11-26 19:02:29.624006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.429 19:02:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.429 "name": "Existed_Raid", 00:15:38.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.429 "strip_size_kb": 64, 00:15:38.429 "state": "configuring", 00:15:38.429 "raid_level": "raid5f", 00:15:38.429 "superblock": false, 00:15:38.429 "num_base_bdevs": 3, 00:15:38.429 "num_base_bdevs_discovered": 2, 00:15:38.429 "num_base_bdevs_operational": 3, 00:15:38.429 "base_bdevs_list": [ 00:15:38.429 { 
00:15:38.429 "name": "BaseBdev1", 00:15:38.429 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:38.429 "is_configured": true, 00:15:38.429 "data_offset": 0, 00:15:38.429 "data_size": 65536 00:15:38.429 }, 00:15:38.429 { 00:15:38.429 "name": null, 00:15:38.429 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:38.429 "is_configured": false, 00:15:38.429 "data_offset": 0, 00:15:38.429 "data_size": 65536 00:15:38.429 }, 00:15:38.429 { 00:15:38.429 "name": "BaseBdev3", 00:15:38.429 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:38.429 "is_configured": true, 00:15:38.429 "data_offset": 0, 00:15:38.429 "data_size": 65536 00:15:38.429 } 00:15:38.429 ] 00:15:38.429 }' 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.429 19:02:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 [2024-11-26 19:02:30.184203] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.997 19:02:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.997 "name": "Existed_Raid", 00:15:38.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.997 "strip_size_kb": 64, 00:15:38.997 "state": "configuring", 00:15:38.997 "raid_level": "raid5f", 00:15:38.997 "superblock": false, 00:15:38.997 "num_base_bdevs": 3, 00:15:38.997 "num_base_bdevs_discovered": 1, 00:15:38.997 "num_base_bdevs_operational": 3, 00:15:38.997 "base_bdevs_list": [ 00:15:38.997 { 00:15:38.997 "name": null, 00:15:38.997 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:38.997 "is_configured": false, 00:15:38.997 "data_offset": 0, 00:15:38.997 "data_size": 65536 00:15:38.997 }, 00:15:38.997 { 00:15:38.997 "name": null, 00:15:38.997 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:38.997 "is_configured": false, 00:15:38.997 "data_offset": 0, 00:15:38.997 "data_size": 65536 00:15:38.998 }, 00:15:38.998 { 00:15:38.998 "name": "BaseBdev3", 00:15:38.998 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:38.998 "is_configured": true, 00:15:38.998 "data_offset": 0, 00:15:38.998 "data_size": 65536 00:15:38.998 } 00:15:38.998 ] 00:15:38.998 }' 00:15:38.998 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.998 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.566 [2024-11-26 19:02:30.832890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.566 19:02:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.566 "name": "Existed_Raid", 00:15:39.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.566 "strip_size_kb": 64, 00:15:39.566 "state": "configuring", 00:15:39.566 "raid_level": "raid5f", 00:15:39.566 "superblock": false, 00:15:39.566 "num_base_bdevs": 3, 00:15:39.566 "num_base_bdevs_discovered": 2, 00:15:39.566 "num_base_bdevs_operational": 3, 00:15:39.566 "base_bdevs_list": [ 00:15:39.566 { 00:15:39.566 "name": null, 00:15:39.566 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:39.566 "is_configured": false, 00:15:39.566 "data_offset": 0, 00:15:39.566 "data_size": 65536 00:15:39.566 }, 00:15:39.566 { 00:15:39.566 "name": "BaseBdev2", 00:15:39.566 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:39.566 "is_configured": true, 00:15:39.566 "data_offset": 0, 00:15:39.566 "data_size": 65536 00:15:39.566 }, 00:15:39.566 { 00:15:39.566 "name": "BaseBdev3", 00:15:39.566 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:39.566 "is_configured": true, 00:15:39.566 "data_offset": 0, 00:15:39.566 "data_size": 65536 00:15:39.566 } 00:15:39.566 ] 00:15:39.566 }' 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.566 19:02:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.135 19:02:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 64e8878d-0901-4050-bc95-e9c7c236d7b6 00:15:40.135 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.136 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.396 [2024-11-26 19:02:31.503258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:40.396 [2024-11-26 19:02:31.503330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:40.396 [2024-11-26 19:02:31.503347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:40.396 [2024-11-26 19:02:31.503685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:40.396 [2024-11-26 19:02:31.508540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:40.396 NewBaseBdev 00:15:40.396 [2024-11-26 19:02:31.508735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:40.396 [2024-11-26 19:02:31.509111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.396 19:02:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.396 [ 00:15:40.396 { 00:15:40.396 "name": "NewBaseBdev", 00:15:40.396 "aliases": [ 00:15:40.396 "64e8878d-0901-4050-bc95-e9c7c236d7b6" 00:15:40.396 ], 00:15:40.396 "product_name": "Malloc disk", 00:15:40.396 "block_size": 512, 00:15:40.396 "num_blocks": 65536, 00:15:40.396 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:40.396 "assigned_rate_limits": { 00:15:40.396 "rw_ios_per_sec": 0, 00:15:40.396 "rw_mbytes_per_sec": 0, 00:15:40.396 "r_mbytes_per_sec": 0, 00:15:40.396 "w_mbytes_per_sec": 0 00:15:40.396 }, 00:15:40.396 "claimed": true, 00:15:40.396 "claim_type": "exclusive_write", 00:15:40.396 "zoned": false, 00:15:40.396 "supported_io_types": { 00:15:40.396 "read": true, 00:15:40.396 "write": true, 00:15:40.396 "unmap": true, 00:15:40.396 "flush": true, 00:15:40.396 "reset": true, 00:15:40.396 "nvme_admin": false, 00:15:40.396 "nvme_io": false, 00:15:40.396 "nvme_io_md": false, 00:15:40.396 "write_zeroes": true, 00:15:40.396 "zcopy": true, 00:15:40.396 "get_zone_info": false, 00:15:40.396 "zone_management": false, 00:15:40.396 "zone_append": false, 00:15:40.396 "compare": false, 00:15:40.396 "compare_and_write": false, 00:15:40.396 "abort": true, 00:15:40.396 "seek_hole": false, 00:15:40.396 "seek_data": false, 00:15:40.396 "copy": true, 00:15:40.396 "nvme_iov_md": false 00:15:40.396 }, 00:15:40.396 "memory_domains": [ 00:15:40.396 { 00:15:40.396 "dma_device_id": "system", 00:15:40.396 "dma_device_type": 1 00:15:40.396 }, 00:15:40.396 { 00:15:40.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.396 "dma_device_type": 2 00:15:40.396 } 00:15:40.396 ], 00:15:40.396 "driver_specific": {} 00:15:40.396 } 00:15:40.396 ] 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:40.396 19:02:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.396 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.397 "name": "Existed_Raid", 00:15:40.397 "uuid": "b2eef2b7-a9cb-4d35-8269-e3bccfaff082", 00:15:40.397 "strip_size_kb": 64, 00:15:40.397 "state": "online", 
00:15:40.397 "raid_level": "raid5f", 00:15:40.397 "superblock": false, 00:15:40.397 "num_base_bdevs": 3, 00:15:40.397 "num_base_bdevs_discovered": 3, 00:15:40.397 "num_base_bdevs_operational": 3, 00:15:40.397 "base_bdevs_list": [ 00:15:40.397 { 00:15:40.397 "name": "NewBaseBdev", 00:15:40.397 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:40.397 "is_configured": true, 00:15:40.397 "data_offset": 0, 00:15:40.397 "data_size": 65536 00:15:40.397 }, 00:15:40.397 { 00:15:40.397 "name": "BaseBdev2", 00:15:40.397 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:40.397 "is_configured": true, 00:15:40.397 "data_offset": 0, 00:15:40.397 "data_size": 65536 00:15:40.397 }, 00:15:40.397 { 00:15:40.397 "name": "BaseBdev3", 00:15:40.397 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:40.397 "is_configured": true, 00:15:40.397 "data_offset": 0, 00:15:40.397 "data_size": 65536 00:15:40.397 } 00:15:40.397 ] 00:15:40.397 }' 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.397 19:02:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.965 [2024-11-26 19:02:32.079093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:40.965 "name": "Existed_Raid", 00:15:40.965 "aliases": [ 00:15:40.965 "b2eef2b7-a9cb-4d35-8269-e3bccfaff082" 00:15:40.965 ], 00:15:40.965 "product_name": "Raid Volume", 00:15:40.965 "block_size": 512, 00:15:40.965 "num_blocks": 131072, 00:15:40.965 "uuid": "b2eef2b7-a9cb-4d35-8269-e3bccfaff082", 00:15:40.965 "assigned_rate_limits": { 00:15:40.965 "rw_ios_per_sec": 0, 00:15:40.965 "rw_mbytes_per_sec": 0, 00:15:40.965 "r_mbytes_per_sec": 0, 00:15:40.965 "w_mbytes_per_sec": 0 00:15:40.965 }, 00:15:40.965 "claimed": false, 00:15:40.965 "zoned": false, 00:15:40.965 "supported_io_types": { 00:15:40.965 "read": true, 00:15:40.965 "write": true, 00:15:40.965 "unmap": false, 00:15:40.965 "flush": false, 00:15:40.965 "reset": true, 00:15:40.965 "nvme_admin": false, 00:15:40.965 "nvme_io": false, 00:15:40.965 "nvme_io_md": false, 00:15:40.965 "write_zeroes": true, 00:15:40.965 "zcopy": false, 00:15:40.965 "get_zone_info": false, 00:15:40.965 "zone_management": false, 00:15:40.965 "zone_append": false, 00:15:40.965 "compare": false, 00:15:40.965 "compare_and_write": false, 00:15:40.965 "abort": false, 00:15:40.965 "seek_hole": false, 00:15:40.965 "seek_data": false, 00:15:40.965 "copy": false, 00:15:40.965 "nvme_iov_md": false 00:15:40.965 }, 00:15:40.965 "driver_specific": { 00:15:40.965 "raid": { 00:15:40.965 "uuid": "b2eef2b7-a9cb-4d35-8269-e3bccfaff082", 
00:15:40.965 "strip_size_kb": 64, 00:15:40.965 "state": "online", 00:15:40.965 "raid_level": "raid5f", 00:15:40.965 "superblock": false, 00:15:40.965 "num_base_bdevs": 3, 00:15:40.965 "num_base_bdevs_discovered": 3, 00:15:40.965 "num_base_bdevs_operational": 3, 00:15:40.965 "base_bdevs_list": [ 00:15:40.965 { 00:15:40.965 "name": "NewBaseBdev", 00:15:40.965 "uuid": "64e8878d-0901-4050-bc95-e9c7c236d7b6", 00:15:40.965 "is_configured": true, 00:15:40.965 "data_offset": 0, 00:15:40.965 "data_size": 65536 00:15:40.965 }, 00:15:40.965 { 00:15:40.965 "name": "BaseBdev2", 00:15:40.965 "uuid": "99857352-6314-443f-bddc-28c35f72c158", 00:15:40.965 "is_configured": true, 00:15:40.965 "data_offset": 0, 00:15:40.965 "data_size": 65536 00:15:40.965 }, 00:15:40.965 { 00:15:40.965 "name": "BaseBdev3", 00:15:40.965 "uuid": "3f23c8b7-6c5b-4c70-a7c2-8e59717097d3", 00:15:40.965 "is_configured": true, 00:15:40.965 "data_offset": 0, 00:15:40.965 "data_size": 65536 00:15:40.965 } 00:15:40.965 ] 00:15:40.965 } 00:15:40.965 } 00:15:40.965 }' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:40.965 BaseBdev2 00:15:40.965 BaseBdev3' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.965 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.224 [2024-11-26 19:02:32.394864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.224 [2024-11-26 19:02:32.394898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.224 [2024-11-26 19:02:32.395039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.224 [2024-11-26 19:02:32.395419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.224 [2024-11-26 19:02:32.395442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80278 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80278 ']' 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80278 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80278 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80278' 00:15:41.224 killing process with pid 80278 00:15:41.224 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80278 00:15:41.224 [2024-11-26 19:02:32.437140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.225 19:02:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80278 00:15:41.483 [2024-11-26 19:02:32.689298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.420 19:02:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:42.420 00:15:42.420 real 0m11.729s 00:15:42.420 user 0m19.487s 00:15:42.420 sys 0m1.610s 00:15:42.420 19:02:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.420 ************************************ 00:15:42.420 END TEST raid5f_state_function_test 00:15:42.420 ************************************ 00:15:42.420 19:02:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 19:02:33 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:42.679 19:02:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:15:42.679 19:02:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.679 19:02:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 ************************************ 00:15:42.679 START TEST raid5f_state_function_test_sb 00:15:42.679 ************************************ 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80911 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80911' 00:15:42.679 Process raid pid: 80911 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80911 00:15:42.679 19:02:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80911 ']' 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.679 19:02:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 [2024-11-26 19:02:33.914419] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:15:42.680 [2024-11-26 19:02:33.914641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.938 [2024-11-26 19:02:34.113997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.938 [2024-11-26 19:02:34.279158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.197 [2024-11-26 19:02:34.511120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.197 [2024-11-26 19:02:34.511188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:43.765 19:02:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 [2024-11-26 19:02:34.918678] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.765 [2024-11-26 19:02:34.918791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.765 [2024-11-26 19:02:34.918810] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.765 [2024-11-26 19:02:34.918829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.765 [2024-11-26 19:02:34.918844] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.765 [2024-11-26 19:02:34.918860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.765 "name": "Existed_Raid", 00:15:43.765 "uuid": "c377589b-d9bb-44b5-a91d-fc1c33a121b1", 00:15:43.765 "strip_size_kb": 64, 00:15:43.765 "state": "configuring", 00:15:43.765 "raid_level": "raid5f", 00:15:43.765 "superblock": true, 00:15:43.765 "num_base_bdevs": 3, 00:15:43.765 "num_base_bdevs_discovered": 0, 00:15:43.765 "num_base_bdevs_operational": 3, 00:15:43.765 "base_bdevs_list": [ 00:15:43.765 { 00:15:43.765 "name": "BaseBdev1", 00:15:43.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.765 "is_configured": false, 00:15:43.765 "data_offset": 0, 00:15:43.765 "data_size": 0 00:15:43.765 }, 00:15:43.765 { 00:15:43.765 "name": "BaseBdev2", 00:15:43.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.765 "is_configured": false, 00:15:43.765 
"data_offset": 0, 00:15:43.765 "data_size": 0 00:15:43.765 }, 00:15:43.765 { 00:15:43.765 "name": "BaseBdev3", 00:15:43.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.765 "is_configured": false, 00:15:43.765 "data_offset": 0, 00:15:43.765 "data_size": 0 00:15:43.765 } 00:15:43.765 ] 00:15:43.765 }' 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.765 19:02:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 [2024-11-26 19:02:35.418889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.333 [2024-11-26 19:02:35.418978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 [2024-11-26 19:02:35.430782] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.333 [2024-11-26 19:02:35.430842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.333 [2024-11-26 19:02:35.430859] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.333 [2024-11-26 19:02:35.430885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.333 [2024-11-26 19:02:35.430914] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.333 [2024-11-26 19:02:35.430935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 [2024-11-26 19:02:35.479128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.333 BaseBdev1 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.333 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.333 [ 00:15:44.333 { 00:15:44.333 "name": "BaseBdev1", 00:15:44.333 "aliases": [ 00:15:44.333 "e4b394f9-ff82-49e0-9eb5-98d14a532061" 00:15:44.333 ], 00:15:44.333 "product_name": "Malloc disk", 00:15:44.333 "block_size": 512, 00:15:44.333 "num_blocks": 65536, 00:15:44.333 "uuid": "e4b394f9-ff82-49e0-9eb5-98d14a532061", 00:15:44.333 "assigned_rate_limits": { 00:15:44.333 "rw_ios_per_sec": 0, 00:15:44.333 "rw_mbytes_per_sec": 0, 00:15:44.333 "r_mbytes_per_sec": 0, 00:15:44.333 "w_mbytes_per_sec": 0 00:15:44.333 }, 00:15:44.333 "claimed": true, 00:15:44.333 "claim_type": "exclusive_write", 00:15:44.333 "zoned": false, 00:15:44.333 "supported_io_types": { 00:15:44.333 "read": true, 00:15:44.333 "write": true, 00:15:44.333 "unmap": true, 00:15:44.333 "flush": true, 00:15:44.333 "reset": true, 00:15:44.333 "nvme_admin": false, 00:15:44.333 "nvme_io": false, 00:15:44.333 "nvme_io_md": false, 00:15:44.333 "write_zeroes": true, 00:15:44.333 "zcopy": true, 00:15:44.333 "get_zone_info": false, 00:15:44.334 "zone_management": false, 00:15:44.334 "zone_append": false, 00:15:44.334 "compare": false, 00:15:44.334 "compare_and_write": false, 00:15:44.334 "abort": true, 00:15:44.334 "seek_hole": false, 00:15:44.334 
"seek_data": false, 00:15:44.334 "copy": true, 00:15:44.334 "nvme_iov_md": false 00:15:44.334 }, 00:15:44.334 "memory_domains": [ 00:15:44.334 { 00:15:44.334 "dma_device_id": "system", 00:15:44.334 "dma_device_type": 1 00:15:44.334 }, 00:15:44.334 { 00:15:44.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.334 "dma_device_type": 2 00:15:44.334 } 00:15:44.334 ], 00:15:44.334 "driver_specific": {} 00:15:44.334 } 00:15:44.334 ] 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.334 "name": "Existed_Raid", 00:15:44.334 "uuid": "5feed741-5883-4537-8d07-46a21c426823", 00:15:44.334 "strip_size_kb": 64, 00:15:44.334 "state": "configuring", 00:15:44.334 "raid_level": "raid5f", 00:15:44.334 "superblock": true, 00:15:44.334 "num_base_bdevs": 3, 00:15:44.334 "num_base_bdevs_discovered": 1, 00:15:44.334 "num_base_bdevs_operational": 3, 00:15:44.334 "base_bdevs_list": [ 00:15:44.334 { 00:15:44.334 "name": "BaseBdev1", 00:15:44.334 "uuid": "e4b394f9-ff82-49e0-9eb5-98d14a532061", 00:15:44.334 "is_configured": true, 00:15:44.334 "data_offset": 2048, 00:15:44.334 "data_size": 63488 00:15:44.334 }, 00:15:44.334 { 00:15:44.334 "name": "BaseBdev2", 00:15:44.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.334 "is_configured": false, 00:15:44.334 "data_offset": 0, 00:15:44.334 "data_size": 0 00:15:44.334 }, 00:15:44.334 { 00:15:44.334 "name": "BaseBdev3", 00:15:44.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.334 "is_configured": false, 00:15:44.334 "data_offset": 0, 00:15:44.334 "data_size": 0 00:15:44.334 } 00:15:44.334 ] 00:15:44.334 }' 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.334 19:02:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.902 [2024-11-26 19:02:36.067389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.902 [2024-11-26 19:02:36.067475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.902 [2024-11-26 19:02:36.079440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.902 [2024-11-26 19:02:36.082023] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.902 [2024-11-26 19:02:36.082081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.902 [2024-11-26 19:02:36.082099] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.902 [2024-11-26 19:02:36.082117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.902 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.902 "name": 
"Existed_Raid", 00:15:44.902 "uuid": "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1", 00:15:44.902 "strip_size_kb": 64, 00:15:44.902 "state": "configuring", 00:15:44.902 "raid_level": "raid5f", 00:15:44.902 "superblock": true, 00:15:44.902 "num_base_bdevs": 3, 00:15:44.902 "num_base_bdevs_discovered": 1, 00:15:44.902 "num_base_bdevs_operational": 3, 00:15:44.902 "base_bdevs_list": [ 00:15:44.902 { 00:15:44.902 "name": "BaseBdev1", 00:15:44.902 "uuid": "e4b394f9-ff82-49e0-9eb5-98d14a532061", 00:15:44.902 "is_configured": true, 00:15:44.902 "data_offset": 2048, 00:15:44.902 "data_size": 63488 00:15:44.902 }, 00:15:44.902 { 00:15:44.902 "name": "BaseBdev2", 00:15:44.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.902 "is_configured": false, 00:15:44.902 "data_offset": 0, 00:15:44.902 "data_size": 0 00:15:44.902 }, 00:15:44.902 { 00:15:44.902 "name": "BaseBdev3", 00:15:44.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.902 "is_configured": false, 00:15:44.903 "data_offset": 0, 00:15:44.903 "data_size": 0 00:15:44.903 } 00:15:44.903 ] 00:15:44.903 }' 00:15:44.903 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.903 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.470 [2024-11-26 19:02:36.645432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.470 BaseBdev2 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.470 [ 00:15:45.470 { 00:15:45.470 "name": "BaseBdev2", 00:15:45.470 "aliases": [ 00:15:45.470 "7c83091b-f1e1-4ef9-862a-e83f7eeba405" 00:15:45.470 ], 00:15:45.470 "product_name": "Malloc disk", 00:15:45.470 "block_size": 512, 00:15:45.470 "num_blocks": 65536, 00:15:45.470 "uuid": "7c83091b-f1e1-4ef9-862a-e83f7eeba405", 00:15:45.470 "assigned_rate_limits": { 00:15:45.470 "rw_ios_per_sec": 0, 00:15:45.470 "rw_mbytes_per_sec": 0, 00:15:45.470 "r_mbytes_per_sec": 0, 00:15:45.470 "w_mbytes_per_sec": 0 00:15:45.470 }, 00:15:45.470 "claimed": true, 
00:15:45.470 "claim_type": "exclusive_write", 00:15:45.470 "zoned": false, 00:15:45.470 "supported_io_types": { 00:15:45.470 "read": true, 00:15:45.470 "write": true, 00:15:45.470 "unmap": true, 00:15:45.470 "flush": true, 00:15:45.470 "reset": true, 00:15:45.470 "nvme_admin": false, 00:15:45.470 "nvme_io": false, 00:15:45.470 "nvme_io_md": false, 00:15:45.470 "write_zeroes": true, 00:15:45.470 "zcopy": true, 00:15:45.470 "get_zone_info": false, 00:15:45.470 "zone_management": false, 00:15:45.470 "zone_append": false, 00:15:45.470 "compare": false, 00:15:45.470 "compare_and_write": false, 00:15:45.470 "abort": true, 00:15:45.470 "seek_hole": false, 00:15:45.470 "seek_data": false, 00:15:45.470 "copy": true, 00:15:45.470 "nvme_iov_md": false 00:15:45.470 }, 00:15:45.470 "memory_domains": [ 00:15:45.470 { 00:15:45.470 "dma_device_id": "system", 00:15:45.470 "dma_device_type": 1 00:15:45.470 }, 00:15:45.470 { 00:15:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.470 "dma_device_type": 2 00:15:45.470 } 00:15:45.470 ], 00:15:45.470 "driver_specific": {} 00:15:45.470 } 00:15:45.470 ] 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.470 19:02:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.470 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.471 "name": "Existed_Raid", 00:15:45.471 "uuid": "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1", 00:15:45.471 "strip_size_kb": 64, 00:15:45.471 "state": "configuring", 00:15:45.471 "raid_level": "raid5f", 00:15:45.471 "superblock": true, 00:15:45.471 "num_base_bdevs": 3, 00:15:45.471 "num_base_bdevs_discovered": 2, 00:15:45.471 "num_base_bdevs_operational": 3, 00:15:45.471 "base_bdevs_list": [ 00:15:45.471 { 00:15:45.471 "name": "BaseBdev1", 00:15:45.471 "uuid": "e4b394f9-ff82-49e0-9eb5-98d14a532061", 
00:15:45.471 "is_configured": true, 00:15:45.471 "data_offset": 2048, 00:15:45.471 "data_size": 63488 00:15:45.471 }, 00:15:45.471 { 00:15:45.471 "name": "BaseBdev2", 00:15:45.471 "uuid": "7c83091b-f1e1-4ef9-862a-e83f7eeba405", 00:15:45.471 "is_configured": true, 00:15:45.471 "data_offset": 2048, 00:15:45.471 "data_size": 63488 00:15:45.471 }, 00:15:45.471 { 00:15:45.471 "name": "BaseBdev3", 00:15:45.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.471 "is_configured": false, 00:15:45.471 "data_offset": 0, 00:15:45.471 "data_size": 0 00:15:45.471 } 00:15:45.471 ] 00:15:45.471 }' 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.471 19:02:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.039 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.039 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 [2024-11-26 19:02:37.259324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.039 [2024-11-26 19:02:37.259667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:46.039 [2024-11-26 19:02:37.259698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.039 BaseBdev3 00:15:46.040 [2024-11-26 19:02:37.260099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 [2024-11-26 19:02:37.264775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:46.040 [2024-11-26 19:02:37.265113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:46.040 [2024-11-26 19:02:37.265492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 [ 00:15:46.040 { 00:15:46.040 "name": "BaseBdev3", 00:15:46.040 "aliases": [ 00:15:46.040 "67a02552-01d8-484d-80f1-2ff039af72b9" 00:15:46.040 ], 00:15:46.040 "product_name": "Malloc disk", 00:15:46.040 "block_size": 512, 00:15:46.040 
"num_blocks": 65536, 00:15:46.040 "uuid": "67a02552-01d8-484d-80f1-2ff039af72b9", 00:15:46.040 "assigned_rate_limits": { 00:15:46.040 "rw_ios_per_sec": 0, 00:15:46.040 "rw_mbytes_per_sec": 0, 00:15:46.040 "r_mbytes_per_sec": 0, 00:15:46.040 "w_mbytes_per_sec": 0 00:15:46.040 }, 00:15:46.040 "claimed": true, 00:15:46.040 "claim_type": "exclusive_write", 00:15:46.040 "zoned": false, 00:15:46.040 "supported_io_types": { 00:15:46.040 "read": true, 00:15:46.040 "write": true, 00:15:46.040 "unmap": true, 00:15:46.040 "flush": true, 00:15:46.040 "reset": true, 00:15:46.040 "nvme_admin": false, 00:15:46.040 "nvme_io": false, 00:15:46.040 "nvme_io_md": false, 00:15:46.040 "write_zeroes": true, 00:15:46.040 "zcopy": true, 00:15:46.040 "get_zone_info": false, 00:15:46.040 "zone_management": false, 00:15:46.040 "zone_append": false, 00:15:46.040 "compare": false, 00:15:46.040 "compare_and_write": false, 00:15:46.040 "abort": true, 00:15:46.040 "seek_hole": false, 00:15:46.040 "seek_data": false, 00:15:46.040 "copy": true, 00:15:46.040 "nvme_iov_md": false 00:15:46.040 }, 00:15:46.040 "memory_domains": [ 00:15:46.040 { 00:15:46.040 "dma_device_id": "system", 00:15:46.040 "dma_device_type": 1 00:15:46.040 }, 00:15:46.040 { 00:15:46.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.040 "dma_device_type": 2 00:15:46.040 } 00:15:46.040 ], 00:15:46.040 "driver_specific": {} 00:15:46.040 } 00:15:46.040 ] 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.040 "name": "Existed_Raid", 00:15:46.040 "uuid": "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1", 00:15:46.040 "strip_size_kb": 64, 00:15:46.040 "state": "online", 00:15:46.040 "raid_level": "raid5f", 00:15:46.040 "superblock": true, 
00:15:46.040 "num_base_bdevs": 3, 00:15:46.040 "num_base_bdevs_discovered": 3, 00:15:46.040 "num_base_bdevs_operational": 3, 00:15:46.040 "base_bdevs_list": [ 00:15:46.040 { 00:15:46.040 "name": "BaseBdev1", 00:15:46.040 "uuid": "e4b394f9-ff82-49e0-9eb5-98d14a532061", 00:15:46.040 "is_configured": true, 00:15:46.040 "data_offset": 2048, 00:15:46.040 "data_size": 63488 00:15:46.040 }, 00:15:46.040 { 00:15:46.040 "name": "BaseBdev2", 00:15:46.040 "uuid": "7c83091b-f1e1-4ef9-862a-e83f7eeba405", 00:15:46.040 "is_configured": true, 00:15:46.040 "data_offset": 2048, 00:15:46.040 "data_size": 63488 00:15:46.040 }, 00:15:46.040 { 00:15:46.040 "name": "BaseBdev3", 00:15:46.040 "uuid": "67a02552-01d8-484d-80f1-2ff039af72b9", 00:15:46.040 "is_configured": true, 00:15:46.040 "data_offset": 2048, 00:15:46.040 "data_size": 63488 00:15:46.040 } 00:15:46.040 ] 00:15:46.040 }' 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.040 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.608 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.608 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.608 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.608 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.608 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.609 [2024-11-26 19:02:37.847417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.609 "name": "Existed_Raid", 00:15:46.609 "aliases": [ 00:15:46.609 "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1" 00:15:46.609 ], 00:15:46.609 "product_name": "Raid Volume", 00:15:46.609 "block_size": 512, 00:15:46.609 "num_blocks": 126976, 00:15:46.609 "uuid": "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1", 00:15:46.609 "assigned_rate_limits": { 00:15:46.609 "rw_ios_per_sec": 0, 00:15:46.609 "rw_mbytes_per_sec": 0, 00:15:46.609 "r_mbytes_per_sec": 0, 00:15:46.609 "w_mbytes_per_sec": 0 00:15:46.609 }, 00:15:46.609 "claimed": false, 00:15:46.609 "zoned": false, 00:15:46.609 "supported_io_types": { 00:15:46.609 "read": true, 00:15:46.609 "write": true, 00:15:46.609 "unmap": false, 00:15:46.609 "flush": false, 00:15:46.609 "reset": true, 00:15:46.609 "nvme_admin": false, 00:15:46.609 "nvme_io": false, 00:15:46.609 "nvme_io_md": false, 00:15:46.609 "write_zeroes": true, 00:15:46.609 "zcopy": false, 00:15:46.609 "get_zone_info": false, 00:15:46.609 "zone_management": false, 00:15:46.609 "zone_append": false, 00:15:46.609 "compare": false, 00:15:46.609 "compare_and_write": false, 00:15:46.609 "abort": false, 00:15:46.609 "seek_hole": false, 00:15:46.609 "seek_data": false, 00:15:46.609 "copy": false, 00:15:46.609 "nvme_iov_md": false 00:15:46.609 }, 00:15:46.609 "driver_specific": { 00:15:46.609 "raid": { 00:15:46.609 "uuid": "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1", 00:15:46.609 
"strip_size_kb": 64, 00:15:46.609 "state": "online", 00:15:46.609 "raid_level": "raid5f", 00:15:46.609 "superblock": true, 00:15:46.609 "num_base_bdevs": 3, 00:15:46.609 "num_base_bdevs_discovered": 3, 00:15:46.609 "num_base_bdevs_operational": 3, 00:15:46.609 "base_bdevs_list": [ 00:15:46.609 { 00:15:46.609 "name": "BaseBdev1", 00:15:46.609 "uuid": "e4b394f9-ff82-49e0-9eb5-98d14a532061", 00:15:46.609 "is_configured": true, 00:15:46.609 "data_offset": 2048, 00:15:46.609 "data_size": 63488 00:15:46.609 }, 00:15:46.609 { 00:15:46.609 "name": "BaseBdev2", 00:15:46.609 "uuid": "7c83091b-f1e1-4ef9-862a-e83f7eeba405", 00:15:46.609 "is_configured": true, 00:15:46.609 "data_offset": 2048, 00:15:46.609 "data_size": 63488 00:15:46.609 }, 00:15:46.609 { 00:15:46.609 "name": "BaseBdev3", 00:15:46.609 "uuid": "67a02552-01d8-484d-80f1-2ff039af72b9", 00:15:46.609 "is_configured": true, 00:15:46.609 "data_offset": 2048, 00:15:46.609 "data_size": 63488 00:15:46.609 } 00:15:46.609 ] 00:15:46.609 } 00:15:46.609 } 00:15:46.609 }' 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:46.609 BaseBdev2 00:15:46.609 BaseBdev3' 00:15:46.609 19:02:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.867 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.867 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.868 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.868 [2024-11-26 19:02:38.183181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.127 "name": "Existed_Raid", 00:15:47.127 "uuid": "3a3b610d-38e6-4e7d-ab7a-a526e4f780e1", 00:15:47.127 "strip_size_kb": 64, 00:15:47.127 "state": "online", 00:15:47.127 "raid_level": "raid5f", 00:15:47.127 "superblock": true, 00:15:47.127 "num_base_bdevs": 3, 00:15:47.127 "num_base_bdevs_discovered": 2, 00:15:47.127 "num_base_bdevs_operational": 2, 
00:15:47.127 "base_bdevs_list": [ 00:15:47.127 { 00:15:47.127 "name": null, 00:15:47.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.127 "is_configured": false, 00:15:47.127 "data_offset": 0, 00:15:47.127 "data_size": 63488 00:15:47.127 }, 00:15:47.127 { 00:15:47.127 "name": "BaseBdev2", 00:15:47.127 "uuid": "7c83091b-f1e1-4ef9-862a-e83f7eeba405", 00:15:47.127 "is_configured": true, 00:15:47.127 "data_offset": 2048, 00:15:47.127 "data_size": 63488 00:15:47.127 }, 00:15:47.127 { 00:15:47.127 "name": "BaseBdev3", 00:15:47.127 "uuid": "67a02552-01d8-484d-80f1-2ff039af72b9", 00:15:47.127 "is_configured": true, 00:15:47.127 "data_offset": 2048, 00:15:47.127 "data_size": 63488 00:15:47.127 } 00:15:47.127 ] 00:15:47.127 }' 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.127 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.695 [2024-11-26 19:02:38.881107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.695 [2024-11-26 19:02:38.881351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.695 [2024-11-26 19:02:38.967806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.695 19:02:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.695 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.695 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.695 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:47.695 
19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.695 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.695 [2024-11-26 19:02:39.031827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:47.695 [2024-11-26 19:02:39.031900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.956 BaseBdev2 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.956 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.956 [ 00:15:47.956 { 
00:15:47.956 "name": "BaseBdev2", 00:15:47.956 "aliases": [ 00:15:47.957 "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e" 00:15:47.957 ], 00:15:47.957 "product_name": "Malloc disk", 00:15:47.957 "block_size": 512, 00:15:47.957 "num_blocks": 65536, 00:15:47.957 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:47.957 "assigned_rate_limits": { 00:15:47.957 "rw_ios_per_sec": 0, 00:15:47.957 "rw_mbytes_per_sec": 0, 00:15:47.957 "r_mbytes_per_sec": 0, 00:15:47.957 "w_mbytes_per_sec": 0 00:15:47.957 }, 00:15:47.957 "claimed": false, 00:15:47.957 "zoned": false, 00:15:47.957 "supported_io_types": { 00:15:47.957 "read": true, 00:15:47.957 "write": true, 00:15:47.957 "unmap": true, 00:15:47.957 "flush": true, 00:15:47.957 "reset": true, 00:15:47.957 "nvme_admin": false, 00:15:47.957 "nvme_io": false, 00:15:47.957 "nvme_io_md": false, 00:15:47.957 "write_zeroes": true, 00:15:47.957 "zcopy": true, 00:15:47.957 "get_zone_info": false, 00:15:47.957 "zone_management": false, 00:15:47.957 "zone_append": false, 00:15:47.957 "compare": false, 00:15:47.957 "compare_and_write": false, 00:15:47.957 "abort": true, 00:15:47.957 "seek_hole": false, 00:15:47.957 "seek_data": false, 00:15:47.957 "copy": true, 00:15:47.957 "nvme_iov_md": false 00:15:47.957 }, 00:15:47.957 "memory_domains": [ 00:15:47.957 { 00:15:47.957 "dma_device_id": "system", 00:15:47.957 "dma_device_type": 1 00:15:47.957 }, 00:15:47.957 { 00:15:47.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.957 "dma_device_type": 2 00:15:47.957 } 00:15:47.957 ], 00:15:47.957 "driver_specific": {} 00:15:47.957 } 00:15:47.957 ] 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.957 BaseBdev3 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:47.957 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.957 19:02:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.216 [ 00:15:48.216 { 00:15:48.216 "name": "BaseBdev3", 00:15:48.216 "aliases": [ 00:15:48.216 "da7c8d89-18a1-4579-ab7b-b1bca0dd7729" 00:15:48.216 ], 00:15:48.216 "product_name": "Malloc disk", 00:15:48.216 "block_size": 512, 00:15:48.216 "num_blocks": 65536, 00:15:48.216 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:48.216 "assigned_rate_limits": { 00:15:48.216 "rw_ios_per_sec": 0, 00:15:48.216 "rw_mbytes_per_sec": 0, 00:15:48.216 "r_mbytes_per_sec": 0, 00:15:48.216 "w_mbytes_per_sec": 0 00:15:48.216 }, 00:15:48.216 "claimed": false, 00:15:48.216 "zoned": false, 00:15:48.216 "supported_io_types": { 00:15:48.216 "read": true, 00:15:48.216 "write": true, 00:15:48.216 "unmap": true, 00:15:48.216 "flush": true, 00:15:48.216 "reset": true, 00:15:48.216 "nvme_admin": false, 00:15:48.216 "nvme_io": false, 00:15:48.216 "nvme_io_md": false, 00:15:48.216 "write_zeroes": true, 00:15:48.216 "zcopy": true, 00:15:48.216 "get_zone_info": false, 00:15:48.216 "zone_management": false, 00:15:48.216 "zone_append": false, 00:15:48.216 "compare": false, 00:15:48.216 "compare_and_write": false, 00:15:48.216 "abort": true, 00:15:48.216 "seek_hole": false, 00:15:48.216 "seek_data": false, 00:15:48.216 "copy": true, 00:15:48.216 "nvme_iov_md": false 00:15:48.216 }, 00:15:48.216 "memory_domains": [ 00:15:48.216 { 00:15:48.216 "dma_device_id": "system", 00:15:48.216 "dma_device_type": 1 00:15:48.216 }, 00:15:48.216 { 00:15:48.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.216 "dma_device_type": 2 00:15:48.216 } 00:15:48.216 ], 00:15:48.216 "driver_specific": {} 00:15:48.216 } 00:15:48.216 ] 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.216 [2024-11-26 19:02:39.340081] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.216 [2024-11-26 19:02:39.340270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.216 [2024-11-26 19:02:39.340324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.216 [2024-11-26 19:02:39.342861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.216 19:02:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.216 "name": "Existed_Raid", 00:15:48.216 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:48.216 "strip_size_kb": 64, 00:15:48.216 "state": "configuring", 00:15:48.216 "raid_level": "raid5f", 00:15:48.216 "superblock": true, 00:15:48.216 "num_base_bdevs": 3, 00:15:48.216 "num_base_bdevs_discovered": 2, 00:15:48.216 "num_base_bdevs_operational": 3, 00:15:48.216 "base_bdevs_list": [ 00:15:48.216 { 00:15:48.216 "name": "BaseBdev1", 00:15:48.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.216 "is_configured": false, 00:15:48.216 "data_offset": 0, 00:15:48.216 "data_size": 0 00:15:48.216 }, 00:15:48.216 { 00:15:48.216 "name": "BaseBdev2", 00:15:48.216 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:48.216 "is_configured": true, 00:15:48.216 "data_offset": 2048, 00:15:48.216 "data_size": 63488 00:15:48.216 }, 00:15:48.216 { 
00:15:48.216 "name": "BaseBdev3", 00:15:48.216 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:48.216 "is_configured": true, 00:15:48.216 "data_offset": 2048, 00:15:48.216 "data_size": 63488 00:15:48.216 } 00:15:48.216 ] 00:15:48.216 }' 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.216 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.786 [2024-11-26 19:02:39.940329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.786 19:02:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.786 "name": "Existed_Raid", 00:15:48.786 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:48.786 "strip_size_kb": 64, 00:15:48.786 "state": "configuring", 00:15:48.786 "raid_level": "raid5f", 00:15:48.786 "superblock": true, 00:15:48.786 "num_base_bdevs": 3, 00:15:48.786 "num_base_bdevs_discovered": 1, 00:15:48.786 "num_base_bdevs_operational": 3, 00:15:48.786 "base_bdevs_list": [ 00:15:48.786 { 00:15:48.786 "name": "BaseBdev1", 00:15:48.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.786 "is_configured": false, 00:15:48.786 "data_offset": 0, 00:15:48.786 "data_size": 0 00:15:48.786 }, 00:15:48.786 { 00:15:48.787 "name": null, 00:15:48.787 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:48.787 "is_configured": false, 00:15:48.787 "data_offset": 0, 00:15:48.787 "data_size": 63488 00:15:48.787 }, 00:15:48.787 { 00:15:48.787 "name": "BaseBdev3", 00:15:48.787 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:48.787 "is_configured": true, 00:15:48.787 "data_offset": 2048, 00:15:48.787 "data_size": 
63488 00:15:48.787 } 00:15:48.787 ] 00:15:48.787 }' 00:15:48.787 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.787 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.358 [2024-11-26 19:02:40.571868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.358 BaseBdev1 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.358 19:02:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.358 [ 00:15:49.358 { 00:15:49.358 "name": "BaseBdev1", 00:15:49.358 "aliases": [ 00:15:49.358 "42a41352-408d-4892-9353-02021781807e" 00:15:49.358 ], 00:15:49.358 "product_name": "Malloc disk", 00:15:49.358 "block_size": 512, 00:15:49.358 "num_blocks": 65536, 00:15:49.358 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:49.358 "assigned_rate_limits": { 00:15:49.358 "rw_ios_per_sec": 0, 00:15:49.358 "rw_mbytes_per_sec": 0, 00:15:49.358 "r_mbytes_per_sec": 0, 00:15:49.358 "w_mbytes_per_sec": 0 00:15:49.358 }, 00:15:49.358 "claimed": true, 00:15:49.358 "claim_type": "exclusive_write", 00:15:49.358 "zoned": false, 00:15:49.358 "supported_io_types": { 00:15:49.358 "read": true, 00:15:49.358 "write": true, 00:15:49.358 "unmap": true, 00:15:49.358 "flush": true, 00:15:49.358 "reset": true, 00:15:49.358 "nvme_admin": false, 00:15:49.358 
"nvme_io": false, 00:15:49.358 "nvme_io_md": false, 00:15:49.358 "write_zeroes": true, 00:15:49.358 "zcopy": true, 00:15:49.358 "get_zone_info": false, 00:15:49.358 "zone_management": false, 00:15:49.358 "zone_append": false, 00:15:49.358 "compare": false, 00:15:49.358 "compare_and_write": false, 00:15:49.358 "abort": true, 00:15:49.358 "seek_hole": false, 00:15:49.358 "seek_data": false, 00:15:49.358 "copy": true, 00:15:49.358 "nvme_iov_md": false 00:15:49.358 }, 00:15:49.358 "memory_domains": [ 00:15:49.358 { 00:15:49.358 "dma_device_id": "system", 00:15:49.358 "dma_device_type": 1 00:15:49.358 }, 00:15:49.358 { 00:15:49.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.358 "dma_device_type": 2 00:15:49.358 } 00:15:49.358 ], 00:15:49.358 "driver_specific": {} 00:15:49.358 } 00:15:49.358 ] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.358 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.359 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.359 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.359 "name": "Existed_Raid", 00:15:49.359 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:49.359 "strip_size_kb": 64, 00:15:49.359 "state": "configuring", 00:15:49.359 "raid_level": "raid5f", 00:15:49.359 "superblock": true, 00:15:49.359 "num_base_bdevs": 3, 00:15:49.359 "num_base_bdevs_discovered": 2, 00:15:49.359 "num_base_bdevs_operational": 3, 00:15:49.359 "base_bdevs_list": [ 00:15:49.359 { 00:15:49.359 "name": "BaseBdev1", 00:15:49.359 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:49.359 "is_configured": true, 00:15:49.359 "data_offset": 2048, 00:15:49.359 "data_size": 63488 00:15:49.359 }, 00:15:49.359 { 00:15:49.359 "name": null, 00:15:49.359 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:49.359 "is_configured": false, 00:15:49.359 "data_offset": 0, 00:15:49.359 "data_size": 63488 00:15:49.359 }, 00:15:49.359 { 00:15:49.359 "name": "BaseBdev3", 00:15:49.359 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:49.359 "is_configured": true, 00:15:49.359 "data_offset": 2048, 00:15:49.359 "data_size": 
63488 00:15:49.359 } 00:15:49.359 ] 00:15:49.359 }' 00:15:49.359 19:02:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.359 19:02:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.933 [2024-11-26 19:02:41.212145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.933 19:02:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.933 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.934 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.934 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.934 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.934 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.934 "name": "Existed_Raid", 00:15:49.934 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:49.934 "strip_size_kb": 64, 00:15:49.934 "state": "configuring", 00:15:49.934 "raid_level": "raid5f", 00:15:49.934 "superblock": true, 00:15:49.934 "num_base_bdevs": 3, 00:15:49.934 "num_base_bdevs_discovered": 1, 00:15:49.934 "num_base_bdevs_operational": 3, 00:15:49.934 "base_bdevs_list": [ 00:15:49.934 { 00:15:49.934 "name": "BaseBdev1", 00:15:49.934 "uuid": "42a41352-408d-4892-9353-02021781807e", 
00:15:49.934 "is_configured": true, 00:15:49.934 "data_offset": 2048, 00:15:49.934 "data_size": 63488 00:15:49.934 }, 00:15:49.934 { 00:15:49.934 "name": null, 00:15:49.934 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:49.934 "is_configured": false, 00:15:49.934 "data_offset": 0, 00:15:49.934 "data_size": 63488 00:15:49.934 }, 00:15:49.934 { 00:15:49.934 "name": null, 00:15:49.934 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:49.934 "is_configured": false, 00:15:49.934 "data_offset": 0, 00:15:49.934 "data_size": 63488 00:15:49.934 } 00:15:49.934 ] 00:15:49.934 }' 00:15:49.934 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.934 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.502 [2024-11-26 19:02:41.808438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.502 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.762 19:02:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.762 "name": "Existed_Raid", 00:15:50.762 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:50.762 "strip_size_kb": 64, 00:15:50.762 "state": "configuring", 00:15:50.762 "raid_level": "raid5f", 00:15:50.762 "superblock": true, 00:15:50.762 "num_base_bdevs": 3, 00:15:50.762 "num_base_bdevs_discovered": 2, 00:15:50.762 "num_base_bdevs_operational": 3, 00:15:50.762 "base_bdevs_list": [ 00:15:50.762 { 00:15:50.762 "name": "BaseBdev1", 00:15:50.762 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:50.762 "is_configured": true, 00:15:50.762 "data_offset": 2048, 00:15:50.762 "data_size": 63488 00:15:50.762 }, 00:15:50.762 { 00:15:50.762 "name": null, 00:15:50.762 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:50.762 "is_configured": false, 00:15:50.762 "data_offset": 0, 00:15:50.762 "data_size": 63488 00:15:50.762 }, 00:15:50.762 { 00:15:50.762 "name": "BaseBdev3", 00:15:50.762 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:50.762 "is_configured": true, 00:15:50.762 "data_offset": 2048, 00:15:50.762 "data_size": 63488 00:15:50.762 } 00:15:50.762 ] 00:15:50.762 }' 00:15:50.762 19:02:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.762 19:02:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.021 19:02:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.021 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.021 [2024-11-26 19:02:42.380679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.280 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.280 "name": "Existed_Raid", 00:15:51.280 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:51.281 "strip_size_kb": 64, 00:15:51.281 "state": "configuring", 00:15:51.281 "raid_level": "raid5f", 00:15:51.281 "superblock": true, 00:15:51.281 "num_base_bdevs": 3, 00:15:51.281 "num_base_bdevs_discovered": 1, 00:15:51.281 "num_base_bdevs_operational": 3, 00:15:51.281 "base_bdevs_list": [ 00:15:51.281 { 00:15:51.281 "name": null, 00:15:51.281 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:51.281 "is_configured": false, 00:15:51.281 "data_offset": 0, 00:15:51.281 "data_size": 63488 00:15:51.281 }, 00:15:51.281 { 00:15:51.281 "name": null, 00:15:51.281 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:51.281 "is_configured": false, 00:15:51.281 "data_offset": 0, 00:15:51.281 "data_size": 63488 00:15:51.281 }, 00:15:51.281 { 00:15:51.281 "name": "BaseBdev3", 00:15:51.281 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:51.281 "is_configured": true, 00:15:51.281 "data_offset": 2048, 00:15:51.281 "data_size": 63488 00:15:51.281 } 00:15:51.281 ] 00:15:51.281 }' 00:15:51.281 19:02:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.281 19:02:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 [2024-11-26 19:02:43.066553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.850 19:02:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.850 "name": "Existed_Raid", 00:15:51.850 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:51.850 "strip_size_kb": 64, 00:15:51.850 "state": "configuring", 00:15:51.850 "raid_level": "raid5f", 00:15:51.850 "superblock": true, 00:15:51.850 "num_base_bdevs": 3, 00:15:51.850 "num_base_bdevs_discovered": 2, 00:15:51.850 "num_base_bdevs_operational": 3, 00:15:51.850 "base_bdevs_list": [ 00:15:51.850 { 00:15:51.850 "name": null, 00:15:51.850 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:51.850 "is_configured": false, 00:15:51.850 "data_offset": 0, 00:15:51.850 "data_size": 63488 00:15:51.850 }, 00:15:51.850 { 00:15:51.850 "name": "BaseBdev2", 00:15:51.850 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:51.850 "is_configured": true, 00:15:51.850 "data_offset": 2048, 00:15:51.850 "data_size": 63488 00:15:51.850 }, 00:15:51.850 { 
00:15:51.850 "name": "BaseBdev3", 00:15:51.850 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:51.850 "is_configured": true, 00:15:51.850 "data_offset": 2048, 00:15:51.850 "data_size": 63488 00:15:51.850 } 00:15:51.850 ] 00:15:51.850 }' 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.850 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42a41352-408d-4892-9353-02021781807e 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.419 [2024-11-26 19:02:43.754934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:52.419 NewBaseBdev 00:15:52.419 [2024-11-26 19:02:43.755281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:52.419 [2024-11-26 19:02:43.755306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:52.419 [2024-11-26 19:02:43.755650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.419 [2024-11-26 19:02:43.760700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:52.419 
[2024-11-26 19:02:43.760732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:52.419 [2024-11-26 19:02:43.760984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.419 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.419 [ 00:15:52.419 { 00:15:52.419 "name": "NewBaseBdev", 00:15:52.679 "aliases": [ 00:15:52.679 "42a41352-408d-4892-9353-02021781807e" 00:15:52.679 ], 00:15:52.679 "product_name": "Malloc disk", 00:15:52.679 "block_size": 512, 00:15:52.679 "num_blocks": 65536, 00:15:52.679 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:52.679 "assigned_rate_limits": { 00:15:52.679 "rw_ios_per_sec": 0, 00:15:52.679 "rw_mbytes_per_sec": 0, 00:15:52.679 "r_mbytes_per_sec": 0, 00:15:52.679 "w_mbytes_per_sec": 0 00:15:52.679 }, 00:15:52.679 "claimed": true, 00:15:52.679 "claim_type": "exclusive_write", 00:15:52.679 "zoned": false, 00:15:52.679 "supported_io_types": { 00:15:52.679 "read": true, 00:15:52.679 "write": true, 00:15:52.679 "unmap": true, 00:15:52.679 "flush": true, 00:15:52.679 "reset": true, 00:15:52.679 "nvme_admin": false, 00:15:52.679 "nvme_io": false, 00:15:52.679 "nvme_io_md": false, 00:15:52.679 "write_zeroes": true, 00:15:52.679 "zcopy": true, 00:15:52.679 "get_zone_info": false, 00:15:52.679 "zone_management": false, 00:15:52.679 "zone_append": false, 00:15:52.679 "compare": false, 00:15:52.679 "compare_and_write": false, 00:15:52.679 "abort": true, 00:15:52.679 "seek_hole": false, 00:15:52.679 "seek_data": false, 
00:15:52.679 "copy": true, 00:15:52.679 "nvme_iov_md": false 00:15:52.679 }, 00:15:52.679 "memory_domains": [ 00:15:52.679 { 00:15:52.679 "dma_device_id": "system", 00:15:52.679 "dma_device_type": 1 00:15:52.679 }, 00:15:52.679 { 00:15:52.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.679 "dma_device_type": 2 00:15:52.679 } 00:15:52.679 ], 00:15:52.679 "driver_specific": {} 00:15:52.679 } 00:15:52.679 ] 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.679 19:02:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.679 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.679 "name": "Existed_Raid", 00:15:52.679 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:52.679 "strip_size_kb": 64, 00:15:52.679 "state": "online", 00:15:52.679 "raid_level": "raid5f", 00:15:52.679 "superblock": true, 00:15:52.679 "num_base_bdevs": 3, 00:15:52.679 "num_base_bdevs_discovered": 3, 00:15:52.679 "num_base_bdevs_operational": 3, 00:15:52.679 "base_bdevs_list": [ 00:15:52.679 { 00:15:52.680 "name": "NewBaseBdev", 00:15:52.680 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:52.680 "is_configured": true, 00:15:52.680 "data_offset": 2048, 00:15:52.680 "data_size": 63488 00:15:52.680 }, 00:15:52.680 { 00:15:52.680 "name": "BaseBdev2", 00:15:52.680 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:52.680 "is_configured": true, 00:15:52.680 "data_offset": 2048, 00:15:52.680 "data_size": 63488 00:15:52.680 }, 00:15:52.680 { 00:15:52.680 "name": "BaseBdev3", 00:15:52.680 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:52.680 "is_configured": true, 00:15:52.680 "data_offset": 2048, 00:15:52.680 "data_size": 63488 00:15:52.680 } 00:15:52.680 ] 00:15:52.680 }' 00:15:52.680 19:02:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.680 19:02:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.249 [2024-11-26 19:02:44.335215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.249 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.249 "name": "Existed_Raid", 00:15:53.249 "aliases": [ 00:15:53.249 "43c0df43-a75d-4532-a9a2-2e3d083ff3ba" 00:15:53.249 ], 00:15:53.249 "product_name": "Raid Volume", 00:15:53.249 "block_size": 512, 00:15:53.249 "num_blocks": 126976, 00:15:53.249 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:53.249 "assigned_rate_limits": { 00:15:53.249 "rw_ios_per_sec": 0, 00:15:53.249 "rw_mbytes_per_sec": 0, 00:15:53.249 "r_mbytes_per_sec": 0, 00:15:53.249 "w_mbytes_per_sec": 0 00:15:53.249 }, 00:15:53.249 "claimed": false, 00:15:53.249 "zoned": false, 00:15:53.249 
"supported_io_types": { 00:15:53.249 "read": true, 00:15:53.249 "write": true, 00:15:53.249 "unmap": false, 00:15:53.249 "flush": false, 00:15:53.249 "reset": true, 00:15:53.250 "nvme_admin": false, 00:15:53.250 "nvme_io": false, 00:15:53.250 "nvme_io_md": false, 00:15:53.250 "write_zeroes": true, 00:15:53.250 "zcopy": false, 00:15:53.250 "get_zone_info": false, 00:15:53.250 "zone_management": false, 00:15:53.250 "zone_append": false, 00:15:53.250 "compare": false, 00:15:53.250 "compare_and_write": false, 00:15:53.250 "abort": false, 00:15:53.250 "seek_hole": false, 00:15:53.250 "seek_data": false, 00:15:53.250 "copy": false, 00:15:53.250 "nvme_iov_md": false 00:15:53.250 }, 00:15:53.250 "driver_specific": { 00:15:53.250 "raid": { 00:15:53.250 "uuid": "43c0df43-a75d-4532-a9a2-2e3d083ff3ba", 00:15:53.250 "strip_size_kb": 64, 00:15:53.250 "state": "online", 00:15:53.250 "raid_level": "raid5f", 00:15:53.250 "superblock": true, 00:15:53.250 "num_base_bdevs": 3, 00:15:53.250 "num_base_bdevs_discovered": 3, 00:15:53.250 "num_base_bdevs_operational": 3, 00:15:53.250 "base_bdevs_list": [ 00:15:53.250 { 00:15:53.250 "name": "NewBaseBdev", 00:15:53.250 "uuid": "42a41352-408d-4892-9353-02021781807e", 00:15:53.250 "is_configured": true, 00:15:53.250 "data_offset": 2048, 00:15:53.250 "data_size": 63488 00:15:53.250 }, 00:15:53.250 { 00:15:53.250 "name": "BaseBdev2", 00:15:53.250 "uuid": "77acc8f7-a9cb-4ec6-9e80-f3b507608b2e", 00:15:53.250 "is_configured": true, 00:15:53.250 "data_offset": 2048, 00:15:53.250 "data_size": 63488 00:15:53.250 }, 00:15:53.250 { 00:15:53.250 "name": "BaseBdev3", 00:15:53.250 "uuid": "da7c8d89-18a1-4579-ab7b-b1bca0dd7729", 00:15:53.250 "is_configured": true, 00:15:53.250 "data_offset": 2048, 00:15:53.250 "data_size": 63488 00:15:53.250 } 00:15:53.250 ] 00:15:53.250 } 00:15:53.250 } 00:15:53.250 }' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:53.250 BaseBdev2 00:15:53.250 BaseBdev3' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.250 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.509 [2024-11-26 19:02:44.651062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.509 [2024-11-26 19:02:44.651105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:53.509 [2024-11-26 19:02:44.651255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.509 [2024-11-26 19:02:44.651726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.509 [2024-11-26 19:02:44.651790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80911 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80911 ']' 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80911 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80911 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.509 killing process with pid 80911 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80911' 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80911 00:15:53.509 [2024-11-26 19:02:44.690597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.509 19:02:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80911 00:15:53.768 [2024-11-26 19:02:44.975721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.144 19:02:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:55.144 00:15:55.144 real 0m12.300s 00:15:55.144 user 0m20.251s 00:15:55.144 sys 0m1.837s 00:15:55.144 19:02:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.144 19:02:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.144 ************************************ 00:15:55.144 END TEST raid5f_state_function_test_sb 00:15:55.144 ************************************ 00:15:55.144 19:02:46 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:55.144 19:02:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:55.144 19:02:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.144 19:02:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.144 ************************************ 00:15:55.144 START TEST raid5f_superblock_test 00:15:55.144 ************************************ 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81546 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81546 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81546 ']' 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.144 19:02:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.144 [2024-11-26 19:02:46.266836] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:15:55.144 [2024-11-26 19:02:46.267039] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81546 ] 00:15:55.144 [2024-11-26 19:02:46.452799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.401 [2024-11-26 19:02:46.585910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.658 [2024-11-26 19:02:46.791683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.658 [2024-11-26 19:02:46.791752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.915 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.174 malloc1 00:15:56.174 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.174 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.174 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.174 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.174 [2024-11-26 19:02:47.314052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.174 [2024-11-26 19:02:47.314134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.174 [2024-11-26 19:02:47.314170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.174 [2024-11-26 19:02:47.314186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.174 [2024-11-26 19:02:47.317220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.174 [2024-11-26 19:02:47.317265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.174 pt1 00:15:56.174 
19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.174 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.174 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.175 malloc2 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.175 [2024-11-26 19:02:47.370749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.175 [2024-11-26 
19:02:47.370827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.175 [2024-11-26 19:02:47.370868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.175 [2024-11-26 19:02:47.370883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.175 [2024-11-26 19:02:47.373894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.175 [2024-11-26 19:02:47.373949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.175 pt2 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.175 malloc3 00:15:56.175 19:02:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.175 [2024-11-26 19:02:47.439068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:56.175 [2024-11-26 19:02:47.439133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.175 [2024-11-26 19:02:47.439167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:56.175 [2024-11-26 19:02:47.439184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.175 [2024-11-26 19:02:47.442031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.175 [2024-11-26 19:02:47.442075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:56.175 pt3 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.175 [2024-11-26 19:02:47.451132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:15:56.175 [2024-11-26 19:02:47.453635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.175 [2024-11-26 19:02:47.453768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:56.175 [2024-11-26 19:02:47.454039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:56.175 [2024-11-26 19:02:47.454078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:56.175 [2024-11-26 19:02:47.454380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:56.175 [2024-11-26 19:02:47.459655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:56.175 [2024-11-26 19:02:47.459684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:56.175 [2024-11-26 19:02:47.459999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.175 "name": "raid_bdev1", 00:15:56.175 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:56.175 "strip_size_kb": 64, 00:15:56.175 "state": "online", 00:15:56.175 "raid_level": "raid5f", 00:15:56.175 "superblock": true, 00:15:56.175 "num_base_bdevs": 3, 00:15:56.175 "num_base_bdevs_discovered": 3, 00:15:56.175 "num_base_bdevs_operational": 3, 00:15:56.175 "base_bdevs_list": [ 00:15:56.175 { 00:15:56.175 "name": "pt1", 00:15:56.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.175 "is_configured": true, 00:15:56.175 "data_offset": 2048, 00:15:56.175 "data_size": 63488 00:15:56.175 }, 00:15:56.175 { 00:15:56.175 "name": "pt2", 00:15:56.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.175 "is_configured": true, 00:15:56.175 "data_offset": 2048, 00:15:56.175 "data_size": 63488 00:15:56.175 }, 00:15:56.175 { 00:15:56.175 "name": "pt3", 00:15:56.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.175 "is_configured": true, 00:15:56.175 "data_offset": 2048, 00:15:56.175 "data_size": 63488 00:15:56.175 } 00:15:56.175 ] 
00:15:56.175 }' 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.175 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.740 [2024-11-26 19:02:47.966164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.740 19:02:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.740 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.740 "name": "raid_bdev1", 00:15:56.740 "aliases": [ 00:15:56.740 "bb81151f-7ccb-4a18-ab22-3707a96b860a" 00:15:56.740 ], 00:15:56.740 "product_name": "Raid Volume", 00:15:56.740 "block_size": 512, 00:15:56.740 "num_blocks": 126976, 00:15:56.740 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:56.740 "assigned_rate_limits": { 00:15:56.740 
"rw_ios_per_sec": 0, 00:15:56.740 "rw_mbytes_per_sec": 0, 00:15:56.740 "r_mbytes_per_sec": 0, 00:15:56.740 "w_mbytes_per_sec": 0 00:15:56.740 }, 00:15:56.740 "claimed": false, 00:15:56.740 "zoned": false, 00:15:56.740 "supported_io_types": { 00:15:56.740 "read": true, 00:15:56.740 "write": true, 00:15:56.740 "unmap": false, 00:15:56.740 "flush": false, 00:15:56.740 "reset": true, 00:15:56.740 "nvme_admin": false, 00:15:56.740 "nvme_io": false, 00:15:56.740 "nvme_io_md": false, 00:15:56.740 "write_zeroes": true, 00:15:56.740 "zcopy": false, 00:15:56.740 "get_zone_info": false, 00:15:56.740 "zone_management": false, 00:15:56.740 "zone_append": false, 00:15:56.740 "compare": false, 00:15:56.740 "compare_and_write": false, 00:15:56.740 "abort": false, 00:15:56.740 "seek_hole": false, 00:15:56.740 "seek_data": false, 00:15:56.740 "copy": false, 00:15:56.740 "nvme_iov_md": false 00:15:56.740 }, 00:15:56.740 "driver_specific": { 00:15:56.740 "raid": { 00:15:56.740 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:56.740 "strip_size_kb": 64, 00:15:56.740 "state": "online", 00:15:56.740 "raid_level": "raid5f", 00:15:56.740 "superblock": true, 00:15:56.740 "num_base_bdevs": 3, 00:15:56.740 "num_base_bdevs_discovered": 3, 00:15:56.740 "num_base_bdevs_operational": 3, 00:15:56.740 "base_bdevs_list": [ 00:15:56.740 { 00:15:56.740 "name": "pt1", 00:15:56.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.740 "is_configured": true, 00:15:56.740 "data_offset": 2048, 00:15:56.740 "data_size": 63488 00:15:56.740 }, 00:15:56.740 { 00:15:56.740 "name": "pt2", 00:15:56.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.740 "is_configured": true, 00:15:56.740 "data_offset": 2048, 00:15:56.740 "data_size": 63488 00:15:56.740 }, 00:15:56.740 { 00:15:56.740 "name": "pt3", 00:15:56.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.740 "is_configured": true, 00:15:56.740 "data_offset": 2048, 00:15:56.740 "data_size": 63488 00:15:56.740 } 00:15:56.740 ] 
00:15:56.740 } 00:15:56.740 } 00:15:56.740 }' 00:15:56.740 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.740 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:56.740 pt2 00:15:56.740 pt3' 00:15:56.740 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:56.998 [2024-11-26 19:02:48.286169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.998 19:02:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb81151f-7ccb-4a18-ab22-3707a96b860a 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bb81151f-7ccb-4a18-ab22-3707a96b860a ']' 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.998 [2024-11-26 19:02:48.337915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.998 [2024-11-26 19:02:48.337970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.998 [2024-11-26 19:02:48.338078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.998 [2024-11-26 19:02:48.338196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.998 [2024-11-26 19:02:48.338213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:56.998 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.999 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.999 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.999 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.999 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:56.999 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 [2024-11-26 19:02:48.490119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:57.258 [2024-11-26 
19:02:48.492906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:57.258 [2024-11-26 19:02:48.493004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:57.258 [2024-11-26 19:02:48.493088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:57.258 [2024-11-26 19:02:48.493164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:57.258 [2024-11-26 19:02:48.493199] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:57.258 [2024-11-26 19:02:48.493226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.258 [2024-11-26 19:02:48.493241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:57.258 request: 00:15:57.258 { 00:15:57.258 "name": "raid_bdev1", 00:15:57.258 "raid_level": "raid5f", 00:15:57.258 "base_bdevs": [ 00:15:57.258 "malloc1", 00:15:57.258 "malloc2", 00:15:57.258 "malloc3" 00:15:57.258 ], 00:15:57.258 "strip_size_kb": 64, 00:15:57.258 "superblock": false, 00:15:57.258 "method": "bdev_raid_create", 00:15:57.258 "req_id": 1 00:15:57.258 } 00:15:57.258 Got JSON-RPC error response 00:15:57.258 response: 00:15:57.258 { 00:15:57.258 "code": -17, 00:15:57.258 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:57.258 } 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 [2024-11-26 19:02:48.570138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.258 [2024-11-26 19:02:48.570220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.258 [2024-11-26 19:02:48.570252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:57.258 [2024-11-26 19:02:48.570282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.258 [2024-11-26 19:02:48.573461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.258 [2024-11-26 19:02:48.573518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.258 [2024-11-26 19:02:48.573675] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.258 [2024-11-26 19:02:48.573767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.258 pt1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 19:02:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.516 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.517 "name": "raid_bdev1", 00:15:57.517 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:57.517 "strip_size_kb": 64, 00:15:57.517 "state": "configuring", 00:15:57.517 "raid_level": "raid5f", 00:15:57.517 "superblock": true, 00:15:57.517 "num_base_bdevs": 3, 00:15:57.517 "num_base_bdevs_discovered": 1, 00:15:57.517 "num_base_bdevs_operational": 3, 00:15:57.517 "base_bdevs_list": [ 00:15:57.517 { 00:15:57.517 "name": "pt1", 00:15:57.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.517 "is_configured": true, 00:15:57.517 "data_offset": 2048, 00:15:57.517 "data_size": 63488 00:15:57.517 }, 00:15:57.517 { 00:15:57.517 "name": null, 00:15:57.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.517 "is_configured": false, 00:15:57.517 "data_offset": 2048, 00:15:57.517 "data_size": 63488 00:15:57.517 }, 00:15:57.517 { 00:15:57.517 "name": null, 00:15:57.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.517 "is_configured": false, 00:15:57.517 "data_offset": 2048, 00:15:57.517 "data_size": 63488 00:15:57.517 } 00:15:57.517 ] 00:15:57.517 }' 00:15:57.517 19:02:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.517 19:02:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 [2024-11-26 19:02:49.114343] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.777 [2024-11-26 19:02:49.114469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.777 [2024-11-26 19:02:49.114503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:57.777 [2024-11-26 19:02:49.114518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.777 [2024-11-26 19:02:49.115140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.777 [2024-11-26 19:02:49.115188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.777 [2024-11-26 19:02:49.115336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:57.777 [2024-11-26 19:02:49.115406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.777 pt2 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 [2024-11-26 19:02:49.126341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:57.777 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.778 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.036 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.036 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.036 "name": "raid_bdev1", 00:15:58.036 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:58.036 "strip_size_kb": 64, 00:15:58.036 "state": "configuring", 00:15:58.036 "raid_level": "raid5f", 00:15:58.036 "superblock": true, 00:15:58.036 "num_base_bdevs": 3, 00:15:58.036 "num_base_bdevs_discovered": 1, 00:15:58.036 "num_base_bdevs_operational": 3, 00:15:58.036 "base_bdevs_list": [ 00:15:58.036 { 00:15:58.036 "name": "pt1", 00:15:58.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.036 "is_configured": true, 00:15:58.036 "data_offset": 2048, 00:15:58.036 "data_size": 63488 00:15:58.036 }, 00:15:58.036 { 
00:15:58.036 "name": null, 00:15:58.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.036 "is_configured": false, 00:15:58.036 "data_offset": 0, 00:15:58.036 "data_size": 63488 00:15:58.036 }, 00:15:58.036 { 00:15:58.036 "name": null, 00:15:58.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.036 "is_configured": false, 00:15:58.036 "data_offset": 2048, 00:15:58.036 "data_size": 63488 00:15:58.036 } 00:15:58.036 ] 00:15:58.036 }' 00:15:58.036 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.036 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.294 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:58.294 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.294 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.294 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.294 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.553 [2024-11-26 19:02:49.662504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.553 [2024-11-26 19:02:49.662622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.553 [2024-11-26 19:02:49.662651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:58.553 [2024-11-26 19:02:49.662668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.553 [2024-11-26 19:02:49.663314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.553 [2024-11-26 19:02:49.663356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.553 [2024-11-26 
19:02:49.663488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.553 [2024-11-26 19:02:49.663541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.553 pt2 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.553 [2024-11-26 19:02:49.674499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.553 [2024-11-26 19:02:49.674601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.553 [2024-11-26 19:02:49.674625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:58.553 [2024-11-26 19:02:49.674642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.553 [2024-11-26 19:02:49.675220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.553 [2024-11-26 19:02:49.675269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.553 [2024-11-26 19:02:49.675373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:58.553 [2024-11-26 19:02:49.675426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.553 [2024-11-26 19:02:49.675619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:15:58.553 [2024-11-26 19:02:49.675652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:58.553 [2024-11-26 19:02:49.675998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:58.553 [2024-11-26 19:02:49.681064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:58.553 [2024-11-26 19:02:49.681093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:58.553 [2024-11-26 19:02:49.681330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.553 pt3 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.553 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.553 "name": "raid_bdev1", 00:15:58.553 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:58.553 "strip_size_kb": 64, 00:15:58.553 "state": "online", 00:15:58.553 "raid_level": "raid5f", 00:15:58.553 "superblock": true, 00:15:58.553 "num_base_bdevs": 3, 00:15:58.553 "num_base_bdevs_discovered": 3, 00:15:58.554 "num_base_bdevs_operational": 3, 00:15:58.554 "base_bdevs_list": [ 00:15:58.554 { 00:15:58.554 "name": "pt1", 00:15:58.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.554 "is_configured": true, 00:15:58.554 "data_offset": 2048, 00:15:58.554 "data_size": 63488 00:15:58.554 }, 00:15:58.554 { 00:15:58.554 "name": "pt2", 00:15:58.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.554 "is_configured": true, 00:15:58.554 "data_offset": 2048, 00:15:58.554 "data_size": 63488 00:15:58.554 }, 00:15:58.554 { 00:15:58.554 "name": "pt3", 00:15:58.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.554 "is_configured": true, 00:15:58.554 "data_offset": 2048, 00:15:58.554 "data_size": 63488 00:15:58.554 } 00:15:58.554 ] 00:15:58.554 }' 00:15:58.554 19:02:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.554 19:02:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.122 [2024-11-26 19:02:50.239395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.122 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.122 "name": "raid_bdev1", 00:15:59.122 "aliases": [ 00:15:59.122 "bb81151f-7ccb-4a18-ab22-3707a96b860a" 00:15:59.122 ], 00:15:59.122 "product_name": "Raid Volume", 00:15:59.122 "block_size": 512, 00:15:59.122 "num_blocks": 126976, 00:15:59.122 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:59.122 "assigned_rate_limits": { 00:15:59.122 "rw_ios_per_sec": 0, 00:15:59.122 "rw_mbytes_per_sec": 0, 00:15:59.122 "r_mbytes_per_sec": 0, 00:15:59.122 "w_mbytes_per_sec": 0 00:15:59.122 }, 
00:15:59.122 "claimed": false, 00:15:59.122 "zoned": false, 00:15:59.122 "supported_io_types": { 00:15:59.122 "read": true, 00:15:59.122 "write": true, 00:15:59.122 "unmap": false, 00:15:59.122 "flush": false, 00:15:59.122 "reset": true, 00:15:59.122 "nvme_admin": false, 00:15:59.122 "nvme_io": false, 00:15:59.122 "nvme_io_md": false, 00:15:59.122 "write_zeroes": true, 00:15:59.122 "zcopy": false, 00:15:59.122 "get_zone_info": false, 00:15:59.122 "zone_management": false, 00:15:59.122 "zone_append": false, 00:15:59.122 "compare": false, 00:15:59.122 "compare_and_write": false, 00:15:59.122 "abort": false, 00:15:59.122 "seek_hole": false, 00:15:59.122 "seek_data": false, 00:15:59.122 "copy": false, 00:15:59.122 "nvme_iov_md": false 00:15:59.122 }, 00:15:59.123 "driver_specific": { 00:15:59.123 "raid": { 00:15:59.123 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:59.123 "strip_size_kb": 64, 00:15:59.123 "state": "online", 00:15:59.123 "raid_level": "raid5f", 00:15:59.123 "superblock": true, 00:15:59.123 "num_base_bdevs": 3, 00:15:59.123 "num_base_bdevs_discovered": 3, 00:15:59.123 "num_base_bdevs_operational": 3, 00:15:59.123 "base_bdevs_list": [ 00:15:59.123 { 00:15:59.123 "name": "pt1", 00:15:59.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.123 "is_configured": true, 00:15:59.123 "data_offset": 2048, 00:15:59.123 "data_size": 63488 00:15:59.123 }, 00:15:59.123 { 00:15:59.123 "name": "pt2", 00:15:59.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.123 "is_configured": true, 00:15:59.123 "data_offset": 2048, 00:15:59.123 "data_size": 63488 00:15:59.123 }, 00:15:59.123 { 00:15:59.123 "name": "pt3", 00:15:59.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.123 "is_configured": true, 00:15:59.123 "data_offset": 2048, 00:15:59.123 "data_size": 63488 00:15:59.123 } 00:15:59.123 ] 00:15:59.123 } 00:15:59.123 } 00:15:59.123 }' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.123 pt2 00:15:59.123 pt3' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:59.123 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 [2024-11-26 19:02:50.560774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
bb81151f-7ccb-4a18-ab22-3707a96b860a '!=' bb81151f-7ccb-4a18-ab22-3707a96b860a ']' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 [2024-11-26 19:02:50.612619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.383 "name": "raid_bdev1", 00:15:59.383 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:15:59.383 "strip_size_kb": 64, 00:15:59.383 "state": "online", 00:15:59.383 "raid_level": "raid5f", 00:15:59.383 "superblock": true, 00:15:59.383 "num_base_bdevs": 3, 00:15:59.383 "num_base_bdevs_discovered": 2, 00:15:59.383 "num_base_bdevs_operational": 2, 00:15:59.383 "base_bdevs_list": [ 00:15:59.383 { 00:15:59.383 "name": null, 00:15:59.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.383 "is_configured": false, 00:15:59.383 "data_offset": 0, 00:15:59.383 "data_size": 63488 00:15:59.383 }, 00:15:59.383 { 00:15:59.383 "name": "pt2", 00:15:59.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.383 "is_configured": true, 00:15:59.383 "data_offset": 2048, 00:15:59.383 "data_size": 63488 00:15:59.383 }, 00:15:59.383 { 00:15:59.383 "name": "pt3", 00:15:59.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.383 "is_configured": true, 00:15:59.383 "data_offset": 2048, 00:15:59.383 "data_size": 63488 00:15:59.383 } 00:15:59.383 ] 00:15:59.383 }' 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.383 19:02:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 
19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 [2024-11-26 19:02:51.192683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.952 [2024-11-26 19:02:51.192893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.952 [2024-11-26 19:02:51.193057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.952 [2024-11-26 19:02:51.193146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.952 [2024-11-26 19:02:51.193170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 [2024-11-26 19:02:51.280683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:59.952 [2024-11-26 19:02:51.281004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.952 [2024-11-26 19:02:51.281043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:59.952 [2024-11-26 19:02:51.281063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.952 [2024-11-26 19:02:51.284223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.952 [2024-11-26 19:02:51.284472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.952 [2024-11-26 19:02:51.284606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:59.952 [2024-11-26 19:02:51.284678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.952 pt2 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.952 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.211 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.211 "name": "raid_bdev1", 00:16:00.211 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:16:00.211 "strip_size_kb": 64, 00:16:00.211 "state": "configuring", 00:16:00.211 "raid_level": "raid5f", 00:16:00.211 "superblock": true, 00:16:00.211 "num_base_bdevs": 3, 00:16:00.211 "num_base_bdevs_discovered": 1, 00:16:00.211 "num_base_bdevs_operational": 2, 00:16:00.211 "base_bdevs_list": [ 00:16:00.211 { 00:16:00.211 "name": null, 00:16:00.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.211 "is_configured": false, 00:16:00.211 "data_offset": 2048, 00:16:00.211 "data_size": 63488 00:16:00.211 }, 00:16:00.211 { 00:16:00.211 "name": "pt2", 00:16:00.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.211 "is_configured": true, 00:16:00.211 "data_offset": 2048, 00:16:00.212 "data_size": 63488 00:16:00.212 }, 00:16:00.212 { 00:16:00.212 "name": null, 00:16:00.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.212 "is_configured": false, 00:16:00.212 "data_offset": 2048, 00:16:00.212 "data_size": 63488 00:16:00.212 } 00:16:00.212 ] 00:16:00.212 }' 00:16:00.212 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.212 19:02:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.471 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:00.471 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.471 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:00.471 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:00.471 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.471 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.471 [2024-11-26 19:02:51.813132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:00.471 [2024-11-26 19:02:51.813240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.471 [2024-11-26 19:02:51.813332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:00.471 [2024-11-26 19:02:51.813349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.471 [2024-11-26 19:02:51.813959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.471 [2024-11-26 19:02:51.814006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:00.471 [2024-11-26 19:02:51.814118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:00.471 [2024-11-26 19:02:51.814161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:00.471 [2024-11-26 19:02:51.814340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:00.471 [2024-11-26 19:02:51.814361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:00.472 [2024-11-26 
19:02:51.814676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:00.472 [2024-11-26 19:02:51.819578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:00.472 [2024-11-26 19:02:51.819601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:00.472 [2024-11-26 19:02:51.820039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.472 pt3 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.472 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.731 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.731 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.731 "name": "raid_bdev1", 00:16:00.731 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:16:00.731 "strip_size_kb": 64, 00:16:00.731 "state": "online", 00:16:00.731 "raid_level": "raid5f", 00:16:00.731 "superblock": true, 00:16:00.731 "num_base_bdevs": 3, 00:16:00.731 "num_base_bdevs_discovered": 2, 00:16:00.731 "num_base_bdevs_operational": 2, 00:16:00.731 "base_bdevs_list": [ 00:16:00.731 { 00:16:00.731 "name": null, 00:16:00.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.731 "is_configured": false, 00:16:00.731 "data_offset": 2048, 00:16:00.731 "data_size": 63488 00:16:00.731 }, 00:16:00.731 { 00:16:00.731 "name": "pt2", 00:16:00.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.731 "is_configured": true, 00:16:00.731 "data_offset": 2048, 00:16:00.731 "data_size": 63488 00:16:00.731 }, 00:16:00.731 { 00:16:00.731 "name": "pt3", 00:16:00.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.731 "is_configured": true, 00:16:00.731 "data_offset": 2048, 00:16:00.731 "data_size": 63488 00:16:00.731 } 00:16:00.731 ] 00:16:00.731 }' 00:16:00.731 19:02:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.731 19:02:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.299 [2024-11-26 19:02:52.357932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.299 [2024-11-26 19:02:52.357985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.299 [2024-11-26 19:02:52.358091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.299 [2024-11-26 19:02:52.358182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.299 [2024-11-26 19:02:52.358198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.299 19:02:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 [2024-11-26 19:02:52.429982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.299 [2024-11-26 19:02:52.430059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.299 [2024-11-26 19:02:52.430089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:01.299 [2024-11-26 19:02:52.430103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.299 [2024-11-26 19:02:52.433240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.299 [2024-11-26 19:02:52.433316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.299 [2024-11-26 19:02:52.433478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.299 [2024-11-26 19:02:52.433547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.299 [2024-11-26 19:02:52.433722] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:01.299 [2024-11-26 19:02:52.433740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.299 [2024-11-26 19:02:52.433762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:01.299 
[2024-11-26 19:02:52.433856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.299 pt1 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.299 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.299 "name": "raid_bdev1", 00:16:01.299 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:16:01.299 "strip_size_kb": 64, 00:16:01.299 "state": "configuring", 00:16:01.299 "raid_level": "raid5f", 00:16:01.299 "superblock": true, 00:16:01.299 "num_base_bdevs": 3, 00:16:01.299 "num_base_bdevs_discovered": 1, 00:16:01.299 "num_base_bdevs_operational": 2, 00:16:01.299 "base_bdevs_list": [ 00:16:01.299 { 00:16:01.299 "name": null, 00:16:01.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.299 "is_configured": false, 00:16:01.299 "data_offset": 2048, 00:16:01.299 "data_size": 63488 00:16:01.300 }, 00:16:01.300 { 00:16:01.300 "name": "pt2", 00:16:01.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.300 "is_configured": true, 00:16:01.300 "data_offset": 2048, 00:16:01.300 "data_size": 63488 00:16:01.300 }, 00:16:01.300 { 00:16:01.300 "name": null, 00:16:01.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.300 "is_configured": false, 00:16:01.300 "data_offset": 2048, 00:16:01.300 "data_size": 63488 00:16:01.300 } 00:16:01.300 ] 00:16:01.300 }' 00:16:01.300 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.300 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:01.868 19:02:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:01.868 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.868 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 19:02:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:01.868 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:01.868 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:01.868 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.868 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 [2024-11-26 19:02:53.014326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:01.868 [2024-11-26 19:02:53.014421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.868 [2024-11-26 19:02:53.014453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:01.868 [2024-11-26 19:02:53.014467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.868 [2024-11-26 19:02:53.015119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.868 [2024-11-26 19:02:53.015161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:01.868 [2024-11-26 19:02:53.015284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:01.868 [2024-11-26 19:02:53.015333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.868 [2024-11-26 19:02:53.015508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:01.868 [2024-11-26 19:02:53.015529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:01.868 [2024-11-26 19:02:53.015859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:01.868 [2024-11-26 19:02:53.021001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:01.869 [2024-11-26 
19:02:53.021153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:01.869 [2024-11-26 19:02:53.021656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.869 pt3 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.869 19:02:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.869 "name": "raid_bdev1", 00:16:01.869 "uuid": "bb81151f-7ccb-4a18-ab22-3707a96b860a", 00:16:01.869 "strip_size_kb": 64, 00:16:01.869 "state": "online", 00:16:01.869 "raid_level": "raid5f", 00:16:01.869 "superblock": true, 00:16:01.869 "num_base_bdevs": 3, 00:16:01.869 "num_base_bdevs_discovered": 2, 00:16:01.869 "num_base_bdevs_operational": 2, 00:16:01.869 "base_bdevs_list": [ 00:16:01.869 { 00:16:01.869 "name": null, 00:16:01.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.869 "is_configured": false, 00:16:01.869 "data_offset": 2048, 00:16:01.869 "data_size": 63488 00:16:01.869 }, 00:16:01.869 { 00:16:01.869 "name": "pt2", 00:16:01.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.869 "is_configured": true, 00:16:01.869 "data_offset": 2048, 00:16:01.869 "data_size": 63488 00:16:01.869 }, 00:16:01.869 { 00:16:01.869 "name": "pt3", 00:16:01.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.869 "is_configured": true, 00:16:01.869 "data_offset": 2048, 00:16:01.869 "data_size": 63488 00:16:01.869 } 00:16:01.869 ] 00:16:01.869 }' 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.869 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 [2024-11-26 19:02:53.620047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bb81151f-7ccb-4a18-ab22-3707a96b860a '!=' bb81151f-7ccb-4a18-ab22-3707a96b860a ']' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81546 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81546 ']' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81546 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81546 00:16:02.437 killing process with pid 81546 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81546' 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81546 00:16:02.437 [2024-11-26 19:02:53.696992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.437 19:02:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81546 00:16:02.437 [2024-11-26 19:02:53.697122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.437 [2024-11-26 19:02:53.697212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.437 [2024-11-26 19:02:53.697233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:02.696 [2024-11-26 19:02:53.979640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.075 ************************************ 00:16:04.075 END TEST raid5f_superblock_test 00:16:04.075 ************************************ 00:16:04.075 19:02:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:04.075 00:16:04.075 real 0m8.907s 00:16:04.075 user 0m14.495s 00:16:04.075 sys 0m1.363s 00:16:04.075 19:02:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.075 19:02:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.075 19:02:55 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:04.075 19:02:55 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:04.075 19:02:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:04.075 19:02:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.075 19:02:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.075 ************************************ 00:16:04.075 START TEST 
raid5f_rebuild_test 00:16:04.075 ************************************ 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81996 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81996 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81996 ']' 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.075 19:02:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.075 [2024-11-26 19:02:55.249818] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:16:04.075 [2024-11-26 19:02:55.250309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81996 ] 00:16:04.075 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:04.075 Zero copy mechanism will not be used. 00:16:04.075 [2024-11-26 19:02:55.438280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.334 [2024-11-26 19:02:55.571147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.594 [2024-11-26 19:02:55.780688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.594 [2024-11-26 19:02:55.781033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.853 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.853 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:04.853 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.853 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:04.853 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.853 19:02:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.112 BaseBdev1_malloc 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.112 [2024-11-26 19:02:56.258635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:05.112 [2024-11-26 19:02:56.258728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.112 [2024-11-26 19:02:56.258759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:05.112 [2024-11-26 19:02:56.258777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.112 [2024-11-26 19:02:56.261850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.112 [2024-11-26 19:02:56.261942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:05.112 BaseBdev1 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.112 BaseBdev2_malloc 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.112 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 [2024-11-26 19:02:56.313695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:05.113 [2024-11-26 19:02:56.313789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.113 [2024-11-26 19:02:56.313822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:05.113 [2024-11-26 19:02:56.313839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.113 [2024-11-26 19:02:56.316950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.113 [2024-11-26 19:02:56.317151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:05.113 BaseBdev2 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 BaseBdev3_malloc 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 [2024-11-26 19:02:56.393995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:05.113 [2024-11-26 19:02:56.394080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.113 [2024-11-26 19:02:56.394111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:05.113 [2024-11-26 19:02:56.394129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.113 [2024-11-26 19:02:56.396941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.113 [2024-11-26 19:02:56.397188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:05.113 BaseBdev3 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 spare_malloc 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 spare_delay 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 [2024-11-26 19:02:56.453545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.113 [2024-11-26 19:02:56.453628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.113 [2024-11-26 19:02:56.453657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:05.113 [2024-11-26 19:02:56.453673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.113 [2024-11-26 19:02:56.456705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.113 [2024-11-26 19:02:56.456951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.113 spare 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.113 [2024-11-26 19:02:56.465746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.113 [2024-11-26 19:02:56.468502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.113 [2024-11-26 19:02:56.468758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.113 [2024-11-26 19:02:56.468912] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:05.113 [2024-11-26 19:02:56.468949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:05.113 [2024-11-26 19:02:56.469374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:05.113 [2024-11-26 19:02:56.474741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:05.113 [2024-11-26 19:02:56.474932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:05.113 [2024-11-26 19:02:56.475340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.113 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.373 "name": "raid_bdev1", 00:16:05.373 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:05.373 "strip_size_kb": 64, 00:16:05.373 "state": "online", 00:16:05.373 "raid_level": "raid5f", 00:16:05.373 "superblock": false, 00:16:05.373 "num_base_bdevs": 3, 00:16:05.373 "num_base_bdevs_discovered": 3, 00:16:05.373 "num_base_bdevs_operational": 3, 00:16:05.373 "base_bdevs_list": [ 00:16:05.373 { 00:16:05.373 "name": "BaseBdev1", 00:16:05.373 "uuid": "4e6ea41a-29ff-58ea-a9b5-97844df38ac5", 00:16:05.373 "is_configured": true, 00:16:05.373 "data_offset": 0, 00:16:05.373 "data_size": 65536 00:16:05.373 }, 00:16:05.373 { 00:16:05.373 "name": "BaseBdev2", 00:16:05.373 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:05.373 "is_configured": true, 00:16:05.373 "data_offset": 0, 00:16:05.373 "data_size": 65536 00:16:05.373 }, 00:16:05.373 { 00:16:05.373 "name": "BaseBdev3", 00:16:05.373 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:05.373 "is_configured": true, 00:16:05.373 "data_offset": 0, 00:16:05.373 "data_size": 65536 00:16:05.373 } 00:16:05.373 ] 00:16:05.373 }' 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.373 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.633 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:05.633 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.633 19:02:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.633 19:02:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:05.633 [2024-11-26 19:02:56.989899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.892 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:06.152 [2024-11-26 19:02:57.341869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:06.152 /dev/nbd0 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.152 1+0 records in 00:16:06.152 1+0 records out 00:16:06.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375006 s, 10.9 MB/s 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:06.152 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:06.721 512+0 records in 00:16:06.721 512+0 records out 00:16:06.721 67108864 bytes (67 MB, 64 MiB) copied, 0.494551 s, 136 MB/s 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.721 19:02:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.981 [2024-11-26 19:02:58.175582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 [2024-11-26 19:02:58.209549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.981 
19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.981 "name": "raid_bdev1", 00:16:06.981 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:06.981 "strip_size_kb": 64, 00:16:06.981 "state": "online", 00:16:06.981 "raid_level": "raid5f", 00:16:06.981 "superblock": false, 00:16:06.981 "num_base_bdevs": 3, 00:16:06.981 "num_base_bdevs_discovered": 2, 00:16:06.981 "num_base_bdevs_operational": 2, 00:16:06.981 "base_bdevs_list": [ 00:16:06.981 { 
00:16:06.981 "name": null, 00:16:06.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.981 "is_configured": false, 00:16:06.981 "data_offset": 0, 00:16:06.981 "data_size": 65536 00:16:06.981 }, 00:16:06.981 { 00:16:06.981 "name": "BaseBdev2", 00:16:06.981 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:06.981 "is_configured": true, 00:16:06.981 "data_offset": 0, 00:16:06.981 "data_size": 65536 00:16:06.981 }, 00:16:06.981 { 00:16:06.981 "name": "BaseBdev3", 00:16:06.981 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:06.981 "is_configured": true, 00:16:06.981 "data_offset": 0, 00:16:06.981 "data_size": 65536 00:16:06.981 } 00:16:06.981 ] 00:16:06.981 }' 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.981 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.550 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.550 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.550 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.550 [2024-11-26 19:02:58.737751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.550 [2024-11-26 19:02:58.754606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:07.550 19:02:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.550 19:02:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.550 [2024-11-26 19:02:58.762644] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.486 19:02:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.487 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.487 "name": "raid_bdev1", 00:16:08.487 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:08.487 "strip_size_kb": 64, 00:16:08.487 "state": "online", 00:16:08.487 "raid_level": "raid5f", 00:16:08.487 "superblock": false, 00:16:08.487 "num_base_bdevs": 3, 00:16:08.487 "num_base_bdevs_discovered": 3, 00:16:08.487 "num_base_bdevs_operational": 3, 00:16:08.487 "process": { 00:16:08.487 "type": "rebuild", 00:16:08.487 "target": "spare", 00:16:08.487 "progress": { 00:16:08.487 "blocks": 18432, 00:16:08.487 "percent": 14 00:16:08.487 } 00:16:08.487 }, 00:16:08.487 "base_bdevs_list": [ 00:16:08.487 { 00:16:08.487 "name": "spare", 00:16:08.487 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:08.487 "is_configured": true, 00:16:08.487 "data_offset": 0, 00:16:08.487 "data_size": 65536 00:16:08.487 }, 00:16:08.487 { 00:16:08.487 "name": "BaseBdev2", 00:16:08.487 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:08.487 "is_configured": true, 00:16:08.487 "data_offset": 0, 00:16:08.487 
"data_size": 65536 00:16:08.487 }, 00:16:08.487 { 00:16:08.487 "name": "BaseBdev3", 00:16:08.487 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:08.487 "is_configured": true, 00:16:08.487 "data_offset": 0, 00:16:08.487 "data_size": 65536 00:16:08.487 } 00:16:08.487 ] 00:16:08.487 }' 00:16:08.487 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.747 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.747 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.747 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.747 19:02:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.747 19:02:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.747 19:02:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 [2024-11-26 19:02:59.945144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.747 [2024-11-26 19:02:59.979570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.747 [2024-11-26 19:02:59.979692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.747 [2024-11-26 19:02:59.979723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.747 [2024-11-26 19:02:59.979736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.747 "name": "raid_bdev1", 00:16:08.747 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:08.747 "strip_size_kb": 64, 00:16:08.747 "state": "online", 00:16:08.747 "raid_level": "raid5f", 00:16:08.747 "superblock": false, 00:16:08.747 "num_base_bdevs": 3, 00:16:08.747 "num_base_bdevs_discovered": 2, 00:16:08.747 "num_base_bdevs_operational": 2, 00:16:08.747 "base_bdevs_list": [ 00:16:08.747 { 00:16:08.747 "name": null, 00:16:08.747 
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.747 "is_configured": false, 00:16:08.747 "data_offset": 0, 00:16:08.747 "data_size": 65536 00:16:08.747 }, 00:16:08.747 { 00:16:08.747 "name": "BaseBdev2", 00:16:08.747 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:08.747 "is_configured": true, 00:16:08.747 "data_offset": 0, 00:16:08.747 "data_size": 65536 00:16:08.747 }, 00:16:08.747 { 00:16:08.747 "name": "BaseBdev3", 00:16:08.747 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:08.747 "is_configured": true, 00:16:08.747 "data_offset": 0, 00:16:08.747 "data_size": 65536 00:16:08.747 } 00:16:08.747 ] 00:16:08.747 }' 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.747 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.314 "name": "raid_bdev1", 00:16:09.314 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:09.314 "strip_size_kb": 64, 00:16:09.314 "state": "online", 00:16:09.314 "raid_level": "raid5f", 00:16:09.314 "superblock": false, 00:16:09.314 "num_base_bdevs": 3, 00:16:09.314 "num_base_bdevs_discovered": 2, 00:16:09.314 "num_base_bdevs_operational": 2, 00:16:09.314 "base_bdevs_list": [ 00:16:09.314 { 00:16:09.314 "name": null, 00:16:09.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.314 "is_configured": false, 00:16:09.314 "data_offset": 0, 00:16:09.314 "data_size": 65536 00:16:09.314 }, 00:16:09.314 { 00:16:09.314 "name": "BaseBdev2", 00:16:09.314 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:09.314 "is_configured": true, 00:16:09.314 "data_offset": 0, 00:16:09.314 "data_size": 65536 00:16:09.314 }, 00:16:09.314 { 00:16:09.314 "name": "BaseBdev3", 00:16:09.314 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:09.314 "is_configured": true, 00:16:09.314 "data_offset": 0, 00:16:09.314 "data_size": 65536 00:16:09.314 } 00:16:09.314 ] 00:16:09.314 }' 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.314 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.573 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.573 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.573 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.573 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.573 [2024-11-26 19:03:00.696436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:16:09.573 [2024-11-26 19:03:00.711310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:09.573 19:03:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.573 19:03:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:09.573 [2024-11-26 19:03:00.719107] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.510 "name": "raid_bdev1", 00:16:10.510 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:10.510 "strip_size_kb": 64, 00:16:10.510 "state": "online", 00:16:10.510 "raid_level": "raid5f", 00:16:10.510 "superblock": false, 00:16:10.510 "num_base_bdevs": 3, 00:16:10.510 
"num_base_bdevs_discovered": 3, 00:16:10.510 "num_base_bdevs_operational": 3, 00:16:10.510 "process": { 00:16:10.510 "type": "rebuild", 00:16:10.510 "target": "spare", 00:16:10.510 "progress": { 00:16:10.510 "blocks": 18432, 00:16:10.510 "percent": 14 00:16:10.510 } 00:16:10.510 }, 00:16:10.510 "base_bdevs_list": [ 00:16:10.510 { 00:16:10.510 "name": "spare", 00:16:10.510 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:10.510 "is_configured": true, 00:16:10.510 "data_offset": 0, 00:16:10.510 "data_size": 65536 00:16:10.510 }, 00:16:10.510 { 00:16:10.510 "name": "BaseBdev2", 00:16:10.510 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:10.510 "is_configured": true, 00:16:10.510 "data_offset": 0, 00:16:10.510 "data_size": 65536 00:16:10.510 }, 00:16:10.510 { 00:16:10.510 "name": "BaseBdev3", 00:16:10.510 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:10.510 "is_configured": true, 00:16:10.510 "data_offset": 0, 00:16:10.510 "data_size": 65536 00:16:10.510 } 00:16:10.510 ] 00:16:10.510 }' 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.510 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=600 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.769 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.769 "name": "raid_bdev1", 00:16:10.770 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:10.770 "strip_size_kb": 64, 00:16:10.770 "state": "online", 00:16:10.770 "raid_level": "raid5f", 00:16:10.770 "superblock": false, 00:16:10.770 "num_base_bdevs": 3, 00:16:10.770 "num_base_bdevs_discovered": 3, 00:16:10.770 "num_base_bdevs_operational": 3, 00:16:10.770 "process": { 00:16:10.770 "type": "rebuild", 00:16:10.770 "target": "spare", 00:16:10.770 "progress": { 00:16:10.770 "blocks": 22528, 00:16:10.770 "percent": 17 00:16:10.770 } 00:16:10.770 }, 00:16:10.770 "base_bdevs_list": [ 00:16:10.770 { 00:16:10.770 "name": "spare", 00:16:10.770 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:10.770 "is_configured": true, 00:16:10.770 "data_offset": 0, 00:16:10.770 
"data_size": 65536 00:16:10.770 }, 00:16:10.770 { 00:16:10.770 "name": "BaseBdev2", 00:16:10.770 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:10.770 "is_configured": true, 00:16:10.770 "data_offset": 0, 00:16:10.770 "data_size": 65536 00:16:10.770 }, 00:16:10.770 { 00:16:10.770 "name": "BaseBdev3", 00:16:10.770 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:10.770 "is_configured": true, 00:16:10.770 "data_offset": 0, 00:16:10.770 "data_size": 65536 00:16:10.770 } 00:16:10.770 ] 00:16:10.770 }' 00:16:10.770 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.770 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.770 19:03:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.770 19:03:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.770 19:03:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.706 19:03:03 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.706 19:03:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.966 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.966 "name": "raid_bdev1", 00:16:11.966 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:11.966 "strip_size_kb": 64, 00:16:11.966 "state": "online", 00:16:11.966 "raid_level": "raid5f", 00:16:11.966 "superblock": false, 00:16:11.966 "num_base_bdevs": 3, 00:16:11.966 "num_base_bdevs_discovered": 3, 00:16:11.966 "num_base_bdevs_operational": 3, 00:16:11.966 "process": { 00:16:11.966 "type": "rebuild", 00:16:11.966 "target": "spare", 00:16:11.966 "progress": { 00:16:11.966 "blocks": 45056, 00:16:11.966 "percent": 34 00:16:11.966 } 00:16:11.966 }, 00:16:11.966 "base_bdevs_list": [ 00:16:11.966 { 00:16:11.966 "name": "spare", 00:16:11.966 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:11.966 "is_configured": true, 00:16:11.966 "data_offset": 0, 00:16:11.966 "data_size": 65536 00:16:11.966 }, 00:16:11.966 { 00:16:11.966 "name": "BaseBdev2", 00:16:11.966 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:11.966 "is_configured": true, 00:16:11.966 "data_offset": 0, 00:16:11.966 "data_size": 65536 00:16:11.966 }, 00:16:11.966 { 00:16:11.966 "name": "BaseBdev3", 00:16:11.966 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:11.966 "is_configured": true, 00:16:11.966 "data_offset": 0, 00:16:11.966 "data_size": 65536 00:16:11.966 } 00:16:11.966 ] 00:16:11.966 }' 00:16:11.966 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.966 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.966 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:11.966 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.966 19:03:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.903 "name": "raid_bdev1", 00:16:12.903 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:12.903 "strip_size_kb": 64, 00:16:12.903 "state": "online", 00:16:12.903 "raid_level": "raid5f", 00:16:12.903 "superblock": false, 00:16:12.903 "num_base_bdevs": 3, 00:16:12.903 "num_base_bdevs_discovered": 3, 00:16:12.903 "num_base_bdevs_operational": 3, 00:16:12.903 "process": { 00:16:12.903 "type": "rebuild", 00:16:12.903 "target": "spare", 00:16:12.903 
"progress": { 00:16:12.903 "blocks": 69632, 00:16:12.903 "percent": 53 00:16:12.903 } 00:16:12.903 }, 00:16:12.903 "base_bdevs_list": [ 00:16:12.903 { 00:16:12.903 "name": "spare", 00:16:12.903 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:12.903 "is_configured": true, 00:16:12.903 "data_offset": 0, 00:16:12.903 "data_size": 65536 00:16:12.903 }, 00:16:12.903 { 00:16:12.903 "name": "BaseBdev2", 00:16:12.903 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:12.903 "is_configured": true, 00:16:12.903 "data_offset": 0, 00:16:12.903 "data_size": 65536 00:16:12.903 }, 00:16:12.903 { 00:16:12.903 "name": "BaseBdev3", 00:16:12.903 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:12.903 "is_configured": true, 00:16:12.903 "data_offset": 0, 00:16:12.903 "data_size": 65536 00:16:12.903 } 00:16:12.903 ] 00:16:12.903 }' 00:16:12.903 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.162 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.162 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.162 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.162 19:03:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.099 "name": "raid_bdev1", 00:16:14.099 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:14.099 "strip_size_kb": 64, 00:16:14.099 "state": "online", 00:16:14.099 "raid_level": "raid5f", 00:16:14.099 "superblock": false, 00:16:14.099 "num_base_bdevs": 3, 00:16:14.099 "num_base_bdevs_discovered": 3, 00:16:14.099 "num_base_bdevs_operational": 3, 00:16:14.099 "process": { 00:16:14.099 "type": "rebuild", 00:16:14.099 "target": "spare", 00:16:14.099 "progress": { 00:16:14.099 "blocks": 92160, 00:16:14.099 "percent": 70 00:16:14.099 } 00:16:14.099 }, 00:16:14.099 "base_bdevs_list": [ 00:16:14.099 { 00:16:14.099 "name": "spare", 00:16:14.099 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:14.099 "is_configured": true, 00:16:14.099 "data_offset": 0, 00:16:14.099 "data_size": 65536 00:16:14.099 }, 00:16:14.099 { 00:16:14.099 "name": "BaseBdev2", 00:16:14.099 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:14.099 "is_configured": true, 00:16:14.099 "data_offset": 0, 00:16:14.099 "data_size": 65536 00:16:14.099 }, 00:16:14.099 { 00:16:14.099 "name": "BaseBdev3", 00:16:14.099 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:14.099 "is_configured": true, 00:16:14.099 "data_offset": 0, 00:16:14.099 "data_size": 65536 00:16:14.099 } 00:16:14.099 ] 00:16:14.099 }' 
00:16:14.099 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.358 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.358 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.358 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.358 19:03:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.293 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.293 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.294 "name": "raid_bdev1", 00:16:15.294 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:15.294 "strip_size_kb": 64, 00:16:15.294 
"state": "online", 00:16:15.294 "raid_level": "raid5f", 00:16:15.294 "superblock": false, 00:16:15.294 "num_base_bdevs": 3, 00:16:15.294 "num_base_bdevs_discovered": 3, 00:16:15.294 "num_base_bdevs_operational": 3, 00:16:15.294 "process": { 00:16:15.294 "type": "rebuild", 00:16:15.294 "target": "spare", 00:16:15.294 "progress": { 00:16:15.294 "blocks": 116736, 00:16:15.294 "percent": 89 00:16:15.294 } 00:16:15.294 }, 00:16:15.294 "base_bdevs_list": [ 00:16:15.294 { 00:16:15.294 "name": "spare", 00:16:15.294 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:15.294 "is_configured": true, 00:16:15.294 "data_offset": 0, 00:16:15.294 "data_size": 65536 00:16:15.294 }, 00:16:15.294 { 00:16:15.294 "name": "BaseBdev2", 00:16:15.294 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:15.294 "is_configured": true, 00:16:15.294 "data_offset": 0, 00:16:15.294 "data_size": 65536 00:16:15.294 }, 00:16:15.294 { 00:16:15.294 "name": "BaseBdev3", 00:16:15.294 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:15.294 "is_configured": true, 00:16:15.294 "data_offset": 0, 00:16:15.294 "data_size": 65536 00:16:15.294 } 00:16:15.294 ] 00:16:15.294 }' 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.294 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.553 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.553 19:03:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.140 [2024-11-26 19:03:07.206024] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:16.140 [2024-11-26 19:03:07.206157] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:16.140 [2024-11-26 
19:03:07.206223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.402 "name": "raid_bdev1", 00:16:16.402 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:16.402 "strip_size_kb": 64, 00:16:16.402 "state": "online", 00:16:16.402 "raid_level": "raid5f", 00:16:16.402 "superblock": false, 00:16:16.402 "num_base_bdevs": 3, 00:16:16.402 "num_base_bdevs_discovered": 3, 00:16:16.402 "num_base_bdevs_operational": 3, 00:16:16.402 "base_bdevs_list": [ 00:16:16.402 { 00:16:16.402 "name": "spare", 00:16:16.402 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:16.402 "is_configured": true, 00:16:16.402 "data_offset": 0, 00:16:16.402 "data_size": 65536 
00:16:16.402 }, 00:16:16.402 { 00:16:16.402 "name": "BaseBdev2", 00:16:16.402 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:16.402 "is_configured": true, 00:16:16.402 "data_offset": 0, 00:16:16.402 "data_size": 65536 00:16:16.402 }, 00:16:16.402 { 00:16:16.402 "name": "BaseBdev3", 00:16:16.402 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:16.402 "is_configured": true, 00:16:16.402 "data_offset": 0, 00:16:16.402 "data_size": 65536 00:16:16.402 } 00:16:16.402 ] 00:16:16.402 }' 00:16:16.402 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.662 "name": "raid_bdev1", 00:16:16.662 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:16.662 "strip_size_kb": 64, 00:16:16.662 "state": "online", 00:16:16.662 "raid_level": "raid5f", 00:16:16.662 "superblock": false, 00:16:16.662 "num_base_bdevs": 3, 00:16:16.662 "num_base_bdevs_discovered": 3, 00:16:16.662 "num_base_bdevs_operational": 3, 00:16:16.662 "base_bdevs_list": [ 00:16:16.662 { 00:16:16.662 "name": "spare", 00:16:16.662 "uuid": "90cc22f2-75f0-5001-971c-b93373168db3", 00:16:16.662 "is_configured": true, 00:16:16.662 "data_offset": 0, 00:16:16.662 "data_size": 65536 00:16:16.662 }, 00:16:16.662 { 00:16:16.662 "name": "BaseBdev2", 00:16:16.662 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:16.662 "is_configured": true, 00:16:16.662 "data_offset": 0, 00:16:16.662 "data_size": 65536 00:16:16.662 }, 00:16:16.662 { 00:16:16.662 "name": "BaseBdev3", 00:16:16.662 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:16.662 "is_configured": true, 00:16:16.662 "data_offset": 0, 00:16:16.662 "data_size": 65536 00:16:16.662 } 00:16:16.662 ] 00:16:16.662 }' 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.662 19:03:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.662 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.948 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.948 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.948 "name": "raid_bdev1", 00:16:16.948 "uuid": "ed971e90-01e6-4654-b8bb-17a69641e689", 00:16:16.948 "strip_size_kb": 64, 00:16:16.948 "state": "online", 00:16:16.948 "raid_level": "raid5f", 00:16:16.948 "superblock": false, 00:16:16.948 "num_base_bdevs": 3, 00:16:16.948 "num_base_bdevs_discovered": 3, 00:16:16.948 "num_base_bdevs_operational": 3, 00:16:16.948 "base_bdevs_list": [ 00:16:16.948 { 00:16:16.948 "name": "spare", 00:16:16.948 "uuid": 
"90cc22f2-75f0-5001-971c-b93373168db3", 00:16:16.948 "is_configured": true, 00:16:16.948 "data_offset": 0, 00:16:16.948 "data_size": 65536 00:16:16.948 }, 00:16:16.948 { 00:16:16.948 "name": "BaseBdev2", 00:16:16.948 "uuid": "998aba7c-2e01-58b9-9c00-8e87c86ecb69", 00:16:16.948 "is_configured": true, 00:16:16.948 "data_offset": 0, 00:16:16.948 "data_size": 65536 00:16:16.948 }, 00:16:16.948 { 00:16:16.948 "name": "BaseBdev3", 00:16:16.948 "uuid": "30d0de33-cfef-5cfb-9829-c7bca7cddc4a", 00:16:16.948 "is_configured": true, 00:16:16.948 "data_offset": 0, 00:16:16.948 "data_size": 65536 00:16:16.948 } 00:16:16.948 ] 00:16:16.948 }' 00:16:16.948 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.948 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.516 [2024-11-26 19:03:08.590259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.516 [2024-11-26 19:03:08.590295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.516 [2024-11-26 19:03:08.590418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.516 [2024-11-26 19:03:08.590531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.516 [2024-11-26 19:03:08.590557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.516 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:17.776 /dev/nbd0 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.776 1+0 records in 00:16:17.776 1+0 records out 00:16:17.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282932 s, 14.5 MB/s 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.776 19:03:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:18.037 /dev/nbd1 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.037 1+0 records in 00:16:18.037 1+0 records out 00:16:18.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278743 s, 14.7 MB/s 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.037 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.295 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.555 19:03:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81996 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81996 ']' 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81996 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 81996 00:16:18.814 killing process with pid 81996 00:16:18.814 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.814 00:16:18.814 Latency(us) 00:16:18.814 [2024-11-26T19:03:10.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.814 [2024-11-26T19:03:10.181Z] =================================================================================================================== 00:16:18.814 [2024-11-26T19:03:10.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81996' 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81996 00:16:18.814 [2024-11-26 19:03:10.130727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.814 19:03:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81996 00:16:19.381 [2024-11-26 19:03:10.486068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:20.317 00:16:20.317 real 0m16.414s 00:16:20.317 user 0m20.892s 00:16:20.317 sys 0m2.099s 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.317 ************************************ 00:16:20.317 END TEST raid5f_rebuild_test 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.317 ************************************ 00:16:20.317 19:03:11 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:20.317 19:03:11 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:20.317 19:03:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.317 19:03:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.317 ************************************ 00:16:20.317 START TEST raid5f_rebuild_test_sb 00:16:20.317 ************************************ 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:20.317 19:03:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82445 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82445 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82445 ']' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.317 19:03:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.576 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:20.576 Zero copy mechanism will not be used. 00:16:20.576 [2024-11-26 19:03:11.690564] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:16:20.576 [2024-11-26 19:03:11.690702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82445 ] 00:16:20.576 [2024-11-26 19:03:11.866930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.835 [2024-11-26 19:03:11.998513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.139 [2024-11-26 19:03:12.207358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.139 [2024-11-26 19:03:12.207440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.398 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.398 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:21.398 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.398 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:21.398 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.398 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.658 BaseBdev1_malloc 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.658 [2024-11-26 19:03:12.806009] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:21.658 [2024-11-26 19:03:12.806092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.658 [2024-11-26 19:03:12.806123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:21.658 [2024-11-26 19:03:12.806141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.658 [2024-11-26 19:03:12.809038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.658 [2024-11-26 19:03:12.809085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.658 BaseBdev1 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.658 BaseBdev2_malloc 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.658 [2024-11-26 19:03:12.859293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:21.658 [2024-11-26 19:03:12.859368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:21.658 [2024-11-26 19:03:12.859399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:21.658 [2024-11-26 19:03:12.859417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.658 [2024-11-26 19:03:12.862412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.658 [2024-11-26 19:03:12.862455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:21.658 BaseBdev2 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.658 BaseBdev3_malloc 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.658 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 [2024-11-26 19:03:12.916818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:21.659 [2024-11-26 19:03:12.916882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.659 [2024-11-26 19:03:12.916928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:21.659 [2024-11-26 
19:03:12.916949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.659 [2024-11-26 19:03:12.919668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.659 [2024-11-26 19:03:12.919714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:21.659 BaseBdev3 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 spare_malloc 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 spare_delay 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 [2024-11-26 19:03:12.978222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.659 [2024-11-26 19:03:12.978291] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.659 [2024-11-26 19:03:12.978317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:21.659 [2024-11-26 19:03:12.978335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.659 [2024-11-26 19:03:12.981361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.659 [2024-11-26 19:03:12.981413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.659 spare 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 [2024-11-26 19:03:12.986452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.659 [2024-11-26 19:03:12.989233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.659 [2024-11-26 19:03:12.989363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.659 [2024-11-26 19:03:12.989722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:21.659 [2024-11-26 19:03:12.989780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:21.659 [2024-11-26 19:03:12.990209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:21.659 [2024-11-26 19:03:12.995610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:21.659 [2024-11-26 19:03:12.995812] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:21.659 [2024-11-26 19:03:12.996195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.659 19:03:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.659 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.659 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.659 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.659 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.659 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.659 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.918 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.918 "name": "raid_bdev1", 00:16:21.918 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:21.918 "strip_size_kb": 64, 00:16:21.918 "state": "online", 00:16:21.918 "raid_level": "raid5f", 00:16:21.918 "superblock": true, 00:16:21.918 "num_base_bdevs": 3, 00:16:21.918 "num_base_bdevs_discovered": 3, 00:16:21.918 "num_base_bdevs_operational": 3, 00:16:21.918 "base_bdevs_list": [ 00:16:21.918 { 00:16:21.918 "name": "BaseBdev1", 00:16:21.918 "uuid": "fa1b26ed-99b5-5031-aa1a-4b38d4656510", 00:16:21.918 "is_configured": true, 00:16:21.918 "data_offset": 2048, 00:16:21.918 "data_size": 63488 00:16:21.918 }, 00:16:21.918 { 00:16:21.918 "name": "BaseBdev2", 00:16:21.918 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:21.918 "is_configured": true, 00:16:21.918 "data_offset": 2048, 00:16:21.918 "data_size": 63488 00:16:21.918 }, 00:16:21.918 { 00:16:21.918 "name": "BaseBdev3", 00:16:21.918 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:21.918 "is_configured": true, 00:16:21.918 "data_offset": 2048, 00:16:21.918 "data_size": 63488 00:16:21.918 } 00:16:21.918 ] 00:16:21.918 }' 00:16:21.918 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.918 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.177 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:22.177 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.177 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.177 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.177 [2024-11-26 19:03:13.534606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.437 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:22.695 [2024-11-26 19:03:13.918548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:22.695 /dev/nbd0 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.695 1+0 records in 00:16:22.695 1+0 records out 00:16:22.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040651 s, 10.1 MB/s 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:22.695 19:03:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:23.261 496+0 records in 00:16:23.261 496+0 records out 00:16:23.261 65011712 bytes (65 MB, 62 MiB) copied, 0.480752 s, 135 MB/s 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:23.261 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:23.519 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:23.519 [2024-11-26 19:03:14.767180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 [2024-11-26 19:03:14.781224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.520 "name": "raid_bdev1", 00:16:23.520 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:23.520 "strip_size_kb": 64, 00:16:23.520 "state": "online", 00:16:23.520 "raid_level": "raid5f", 00:16:23.520 "superblock": true, 00:16:23.520 "num_base_bdevs": 3, 00:16:23.520 "num_base_bdevs_discovered": 2, 00:16:23.520 "num_base_bdevs_operational": 2, 00:16:23.520 "base_bdevs_list": [ 00:16:23.520 { 00:16:23.520 "name": null, 00:16:23.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.520 "is_configured": 
false, 00:16:23.520 "data_offset": 0, 00:16:23.520 "data_size": 63488 00:16:23.520 }, 00:16:23.520 { 00:16:23.520 "name": "BaseBdev2", 00:16:23.520 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:23.520 "is_configured": true, 00:16:23.520 "data_offset": 2048, 00:16:23.520 "data_size": 63488 00:16:23.520 }, 00:16:23.520 { 00:16:23.520 "name": "BaseBdev3", 00:16:23.520 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:23.520 "is_configured": true, 00:16:23.520 "data_offset": 2048, 00:16:23.520 "data_size": 63488 00:16:23.520 } 00:16:23.520 ] 00:16:23.520 }' 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.520 19:03:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.087 19:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.087 19:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.087 19:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.087 [2024-11-26 19:03:15.301407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.087 [2024-11-26 19:03:15.317479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:24.087 19:03:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.087 19:03:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:24.087 [2024-11-26 19:03:15.325010] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.023 19:03:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.023 "name": "raid_bdev1", 00:16:25.023 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:25.023 "strip_size_kb": 64, 00:16:25.023 "state": "online", 00:16:25.023 "raid_level": "raid5f", 00:16:25.023 "superblock": true, 00:16:25.023 "num_base_bdevs": 3, 00:16:25.023 "num_base_bdevs_discovered": 3, 00:16:25.023 "num_base_bdevs_operational": 3, 00:16:25.023 "process": { 00:16:25.023 "type": "rebuild", 00:16:25.023 "target": "spare", 00:16:25.023 "progress": { 00:16:25.023 "blocks": 18432, 00:16:25.023 "percent": 14 00:16:25.023 } 00:16:25.023 }, 00:16:25.023 "base_bdevs_list": [ 00:16:25.023 { 00:16:25.023 "name": "spare", 00:16:25.023 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:25.023 "is_configured": true, 00:16:25.023 "data_offset": 2048, 00:16:25.023 "data_size": 63488 00:16:25.023 }, 00:16:25.023 { 00:16:25.023 "name": "BaseBdev2", 00:16:25.023 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:25.023 "is_configured": true, 00:16:25.023 "data_offset": 2048, 00:16:25.023 "data_size": 63488 
00:16:25.023 }, 00:16:25.023 { 00:16:25.023 "name": "BaseBdev3", 00:16:25.023 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:25.023 "is_configured": true, 00:16:25.023 "data_offset": 2048, 00:16:25.023 "data_size": 63488 00:16:25.023 } 00:16:25.023 ] 00:16:25.023 }' 00:16:25.023 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.282 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.283 [2024-11-26 19:03:16.482864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.283 [2024-11-26 19:03:16.537471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:25.283 [2024-11-26 19:03:16.537558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.283 [2024-11-26 19:03:16.537589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.283 [2024-11-26 19:03:16.537601] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.283 "name": "raid_bdev1", 00:16:25.283 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:25.283 "strip_size_kb": 64, 00:16:25.283 "state": "online", 00:16:25.283 "raid_level": "raid5f", 00:16:25.283 "superblock": true, 00:16:25.283 "num_base_bdevs": 3, 00:16:25.283 "num_base_bdevs_discovered": 2, 00:16:25.283 "num_base_bdevs_operational": 2, 00:16:25.283 "base_bdevs_list": [ 00:16:25.283 
{ 00:16:25.283 "name": null, 00:16:25.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.283 "is_configured": false, 00:16:25.283 "data_offset": 0, 00:16:25.283 "data_size": 63488 00:16:25.283 }, 00:16:25.283 { 00:16:25.283 "name": "BaseBdev2", 00:16:25.283 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:25.283 "is_configured": true, 00:16:25.283 "data_offset": 2048, 00:16:25.283 "data_size": 63488 00:16:25.283 }, 00:16:25.283 { 00:16:25.283 "name": "BaseBdev3", 00:16:25.283 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:25.283 "is_configured": true, 00:16:25.283 "data_offset": 2048, 00:16:25.283 "data_size": 63488 00:16:25.283 } 00:16:25.283 ] 00:16:25.283 }' 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.283 19:03:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.851 "name": "raid_bdev1", 00:16:25.851 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:25.851 "strip_size_kb": 64, 00:16:25.851 "state": "online", 00:16:25.851 "raid_level": "raid5f", 00:16:25.851 "superblock": true, 00:16:25.851 "num_base_bdevs": 3, 00:16:25.851 "num_base_bdevs_discovered": 2, 00:16:25.851 "num_base_bdevs_operational": 2, 00:16:25.851 "base_bdevs_list": [ 00:16:25.851 { 00:16:25.851 "name": null, 00:16:25.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.851 "is_configured": false, 00:16:25.851 "data_offset": 0, 00:16:25.851 "data_size": 63488 00:16:25.851 }, 00:16:25.851 { 00:16:25.851 "name": "BaseBdev2", 00:16:25.851 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:25.851 "is_configured": true, 00:16:25.851 "data_offset": 2048, 00:16:25.851 "data_size": 63488 00:16:25.851 }, 00:16:25.851 { 00:16:25.851 "name": "BaseBdev3", 00:16:25.851 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:25.851 "is_configured": true, 00:16:25.851 "data_offset": 2048, 00:16:25.851 "data_size": 63488 00:16:25.851 } 00:16:25.851 ] 00:16:25.851 }' 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.851 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:25.851 [2024-11-26 19:03:17.213858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.110 [2024-11-26 19:03:17.228623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:26.110 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.110 19:03:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:26.110 [2024-11-26 19:03:17.236069] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.047 "name": "raid_bdev1", 00:16:27.047 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:27.047 "strip_size_kb": 64, 00:16:27.047 "state": "online", 
00:16:27.047 "raid_level": "raid5f", 00:16:27.047 "superblock": true, 00:16:27.047 "num_base_bdevs": 3, 00:16:27.047 "num_base_bdevs_discovered": 3, 00:16:27.047 "num_base_bdevs_operational": 3, 00:16:27.047 "process": { 00:16:27.047 "type": "rebuild", 00:16:27.047 "target": "spare", 00:16:27.047 "progress": { 00:16:27.047 "blocks": 18432, 00:16:27.047 "percent": 14 00:16:27.047 } 00:16:27.047 }, 00:16:27.047 "base_bdevs_list": [ 00:16:27.047 { 00:16:27.047 "name": "spare", 00:16:27.047 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:27.047 "is_configured": true, 00:16:27.047 "data_offset": 2048, 00:16:27.047 "data_size": 63488 00:16:27.047 }, 00:16:27.047 { 00:16:27.047 "name": "BaseBdev2", 00:16:27.047 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:27.047 "is_configured": true, 00:16:27.047 "data_offset": 2048, 00:16:27.047 "data_size": 63488 00:16:27.047 }, 00:16:27.047 { 00:16:27.047 "name": "BaseBdev3", 00:16:27.047 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:27.047 "is_configured": true, 00:16:27.047 "data_offset": 2048, 00:16:27.047 "data_size": 63488 00:16:27.047 } 00:16:27.047 ] 00:16:27.047 }' 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:27.047 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:27.047 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=617 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.048 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.308 "name": "raid_bdev1", 00:16:27.308 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:27.308 "strip_size_kb": 64, 00:16:27.308 "state": "online", 00:16:27.308 "raid_level": "raid5f", 00:16:27.308 "superblock": true, 00:16:27.308 "num_base_bdevs": 3, 00:16:27.308 "num_base_bdevs_discovered": 3, 00:16:27.308 "num_base_bdevs_operational": 3, 00:16:27.308 "process": { 00:16:27.308 "type": 
"rebuild", 00:16:27.308 "target": "spare", 00:16:27.308 "progress": { 00:16:27.308 "blocks": 22528, 00:16:27.308 "percent": 17 00:16:27.308 } 00:16:27.308 }, 00:16:27.308 "base_bdevs_list": [ 00:16:27.308 { 00:16:27.308 "name": "spare", 00:16:27.308 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:27.308 "is_configured": true, 00:16:27.308 "data_offset": 2048, 00:16:27.308 "data_size": 63488 00:16:27.308 }, 00:16:27.308 { 00:16:27.308 "name": "BaseBdev2", 00:16:27.308 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:27.308 "is_configured": true, 00:16:27.308 "data_offset": 2048, 00:16:27.308 "data_size": 63488 00:16:27.308 }, 00:16:27.308 { 00:16:27.308 "name": "BaseBdev3", 00:16:27.308 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:27.308 "is_configured": true, 00:16:27.308 "data_offset": 2048, 00:16:27.308 "data_size": 63488 00:16:27.308 } 00:16:27.308 ] 00:16:27.308 }' 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.308 19:03:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.244 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.503 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.503 "name": "raid_bdev1", 00:16:28.503 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:28.503 "strip_size_kb": 64, 00:16:28.503 "state": "online", 00:16:28.503 "raid_level": "raid5f", 00:16:28.503 "superblock": true, 00:16:28.503 "num_base_bdevs": 3, 00:16:28.503 "num_base_bdevs_discovered": 3, 00:16:28.503 "num_base_bdevs_operational": 3, 00:16:28.503 "process": { 00:16:28.503 "type": "rebuild", 00:16:28.503 "target": "spare", 00:16:28.503 "progress": { 00:16:28.503 "blocks": 47104, 00:16:28.503 "percent": 37 00:16:28.503 } 00:16:28.503 }, 00:16:28.503 "base_bdevs_list": [ 00:16:28.503 { 00:16:28.503 "name": "spare", 00:16:28.503 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:28.503 "is_configured": true, 00:16:28.503 "data_offset": 2048, 00:16:28.503 "data_size": 63488 00:16:28.503 }, 00:16:28.503 { 00:16:28.503 "name": "BaseBdev2", 00:16:28.503 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:28.503 "is_configured": true, 00:16:28.503 "data_offset": 2048, 00:16:28.503 "data_size": 63488 00:16:28.503 }, 00:16:28.503 { 00:16:28.503 "name": "BaseBdev3", 00:16:28.503 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:28.503 
"is_configured": true, 00:16:28.503 "data_offset": 2048, 00:16:28.503 "data_size": 63488 00:16:28.503 } 00:16:28.503 ] 00:16:28.503 }' 00:16:28.504 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.504 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.504 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.504 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.504 19:03:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.490 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.491 "name": "raid_bdev1", 00:16:29.491 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:29.491 "strip_size_kb": 64, 00:16:29.491 "state": "online", 00:16:29.491 "raid_level": "raid5f", 00:16:29.491 "superblock": true, 00:16:29.491 "num_base_bdevs": 3, 00:16:29.491 "num_base_bdevs_discovered": 3, 00:16:29.491 "num_base_bdevs_operational": 3, 00:16:29.491 "process": { 00:16:29.491 "type": "rebuild", 00:16:29.491 "target": "spare", 00:16:29.491 "progress": { 00:16:29.491 "blocks": 69632, 00:16:29.491 "percent": 54 00:16:29.491 } 00:16:29.491 }, 00:16:29.491 "base_bdevs_list": [ 00:16:29.491 { 00:16:29.491 "name": "spare", 00:16:29.491 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:29.491 "is_configured": true, 00:16:29.491 "data_offset": 2048, 00:16:29.491 "data_size": 63488 00:16:29.491 }, 00:16:29.491 { 00:16:29.491 "name": "BaseBdev2", 00:16:29.491 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:29.491 "is_configured": true, 00:16:29.491 "data_offset": 2048, 00:16:29.491 "data_size": 63488 00:16:29.491 }, 00:16:29.491 { 00:16:29.491 "name": "BaseBdev3", 00:16:29.491 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:29.491 "is_configured": true, 00:16:29.491 "data_offset": 2048, 00:16:29.491 "data_size": 63488 00:16:29.491 } 00:16:29.491 ] 00:16:29.491 }' 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.491 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.749 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.749 19:03:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.686 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.686 "name": "raid_bdev1", 00:16:30.686 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:30.686 "strip_size_kb": 64, 00:16:30.686 "state": "online", 00:16:30.686 "raid_level": "raid5f", 00:16:30.686 "superblock": true, 00:16:30.686 "num_base_bdevs": 3, 00:16:30.686 "num_base_bdevs_discovered": 3, 00:16:30.686 "num_base_bdevs_operational": 3, 00:16:30.686 "process": { 00:16:30.686 "type": "rebuild", 00:16:30.686 "target": "spare", 00:16:30.686 "progress": { 00:16:30.686 "blocks": 94208, 00:16:30.686 "percent": 74 00:16:30.686 } 00:16:30.686 }, 00:16:30.686 "base_bdevs_list": [ 00:16:30.686 { 00:16:30.686 "name": "spare", 00:16:30.686 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:30.687 "is_configured": true, 
00:16:30.687 "data_offset": 2048, 00:16:30.687 "data_size": 63488 00:16:30.687 }, 00:16:30.687 { 00:16:30.687 "name": "BaseBdev2", 00:16:30.687 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:30.687 "is_configured": true, 00:16:30.687 "data_offset": 2048, 00:16:30.687 "data_size": 63488 00:16:30.687 }, 00:16:30.687 { 00:16:30.687 "name": "BaseBdev3", 00:16:30.687 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:30.687 "is_configured": true, 00:16:30.687 "data_offset": 2048, 00:16:30.687 "data_size": 63488 00:16:30.687 } 00:16:30.687 ] 00:16:30.687 }' 00:16:30.687 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.687 19:03:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.687 19:03:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.687 19:03:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.687 19:03:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.067 "name": "raid_bdev1", 00:16:32.067 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:32.067 "strip_size_kb": 64, 00:16:32.067 "state": "online", 00:16:32.067 "raid_level": "raid5f", 00:16:32.067 "superblock": true, 00:16:32.067 "num_base_bdevs": 3, 00:16:32.067 "num_base_bdevs_discovered": 3, 00:16:32.067 "num_base_bdevs_operational": 3, 00:16:32.067 "process": { 00:16:32.067 "type": "rebuild", 00:16:32.067 "target": "spare", 00:16:32.067 "progress": { 00:16:32.067 "blocks": 116736, 00:16:32.067 "percent": 91 00:16:32.067 } 00:16:32.067 }, 00:16:32.067 "base_bdevs_list": [ 00:16:32.067 { 00:16:32.067 "name": "spare", 00:16:32.067 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:32.067 "is_configured": true, 00:16:32.067 "data_offset": 2048, 00:16:32.067 "data_size": 63488 00:16:32.067 }, 00:16:32.067 { 00:16:32.067 "name": "BaseBdev2", 00:16:32.067 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:32.067 "is_configured": true, 00:16:32.067 "data_offset": 2048, 00:16:32.067 "data_size": 63488 00:16:32.067 }, 00:16:32.067 { 00:16:32.067 "name": "BaseBdev3", 00:16:32.067 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:32.067 "is_configured": true, 00:16:32.067 "data_offset": 2048, 00:16:32.067 "data_size": 63488 00:16:32.067 } 00:16:32.067 ] 00:16:32.067 }' 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.067 19:03:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.327 [2024-11-26 19:03:23.510388] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:32.327 [2024-11-26 19:03:23.510508] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:32.327 [2024-11-26 19:03:23.510695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.896 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.155 19:03:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.155 "name": "raid_bdev1", 00:16:33.155 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:33.155 "strip_size_kb": 64, 00:16:33.155 "state": "online", 00:16:33.155 "raid_level": "raid5f", 00:16:33.155 "superblock": true, 00:16:33.155 "num_base_bdevs": 3, 00:16:33.155 "num_base_bdevs_discovered": 3, 00:16:33.155 "num_base_bdevs_operational": 3, 00:16:33.155 "base_bdevs_list": [ 00:16:33.155 { 00:16:33.155 "name": "spare", 00:16:33.155 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:33.155 "is_configured": true, 00:16:33.155 "data_offset": 2048, 00:16:33.155 "data_size": 63488 00:16:33.155 }, 00:16:33.155 { 00:16:33.155 "name": "BaseBdev2", 00:16:33.155 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:33.155 "is_configured": true, 00:16:33.155 "data_offset": 2048, 00:16:33.155 "data_size": 63488 00:16:33.155 }, 00:16:33.155 { 00:16:33.155 "name": "BaseBdev3", 00:16:33.155 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:33.155 "is_configured": true, 00:16:33.155 "data_offset": 2048, 00:16:33.155 "data_size": 63488 00:16:33.155 } 00:16:33.155 ] 00:16:33.155 }' 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.155 
19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.155 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.155 "name": "raid_bdev1", 00:16:33.155 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:33.155 "strip_size_kb": 64, 00:16:33.156 "state": "online", 00:16:33.156 "raid_level": "raid5f", 00:16:33.156 "superblock": true, 00:16:33.156 "num_base_bdevs": 3, 00:16:33.156 "num_base_bdevs_discovered": 3, 00:16:33.156 "num_base_bdevs_operational": 3, 00:16:33.156 "base_bdevs_list": [ 00:16:33.156 { 00:16:33.156 "name": "spare", 00:16:33.156 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:33.156 "is_configured": true, 00:16:33.156 "data_offset": 2048, 00:16:33.156 "data_size": 63488 00:16:33.156 }, 00:16:33.156 { 00:16:33.156 "name": "BaseBdev2", 00:16:33.156 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:33.156 "is_configured": true, 00:16:33.156 "data_offset": 2048, 00:16:33.156 "data_size": 63488 00:16:33.156 }, 00:16:33.156 { 00:16:33.156 "name": "BaseBdev3", 00:16:33.156 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:33.156 "is_configured": true, 00:16:33.156 "data_offset": 2048, 
00:16:33.156 "data_size": 63488 00:16:33.156 } 00:16:33.156 ] 00:16:33.156 }' 00:16:33.156 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.156 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.156 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.416 "name": "raid_bdev1", 00:16:33.416 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:33.416 "strip_size_kb": 64, 00:16:33.416 "state": "online", 00:16:33.416 "raid_level": "raid5f", 00:16:33.416 "superblock": true, 00:16:33.416 "num_base_bdevs": 3, 00:16:33.416 "num_base_bdevs_discovered": 3, 00:16:33.416 "num_base_bdevs_operational": 3, 00:16:33.416 "base_bdevs_list": [ 00:16:33.416 { 00:16:33.416 "name": "spare", 00:16:33.416 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:33.416 "is_configured": true, 00:16:33.416 "data_offset": 2048, 00:16:33.416 "data_size": 63488 00:16:33.416 }, 00:16:33.416 { 00:16:33.416 "name": "BaseBdev2", 00:16:33.416 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:33.416 "is_configured": true, 00:16:33.416 "data_offset": 2048, 00:16:33.416 "data_size": 63488 00:16:33.416 }, 00:16:33.416 { 00:16:33.416 "name": "BaseBdev3", 00:16:33.416 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:33.416 "is_configured": true, 00:16:33.416 "data_offset": 2048, 00:16:33.416 "data_size": 63488 00:16:33.416 } 00:16:33.416 ] 00:16:33.416 }' 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.416 19:03:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.985 [2024-11-26 19:03:25.100517] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.985 [2024-11-26 19:03:25.100558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.985 [2024-11-26 19:03:25.100679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.985 [2024-11-26 19:03:25.100790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.985 [2024-11-26 19:03:25.100815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:33.985 19:03:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.985 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:34.244 /dev/nbd0 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.244 1+0 records in 00:16:34.244 1+0 records out 00:16:34.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301303 s, 13.6 MB/s 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.244 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:34.504 /dev/nbd1 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:34.504 
19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.504 1+0 records in 00:16:34.504 1+0 records out 00:16:34.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449583 s, 9.1 MB/s 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.504 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.763 19:03:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.022 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:35.280 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:35.280 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:35.280 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:35.280 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.280 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.281 [2024-11-26 19:03:26.569918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.281 [2024-11-26 19:03:26.570014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.281 [2024-11-26 19:03:26.570043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:35.281 [2024-11-26 19:03:26.570061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.281 [2024-11-26 19:03:26.573145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.281 [2024-11-26 19:03:26.573193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.281 [2024-11-26 19:03:26.573323] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.281 [2024-11-26 19:03:26.573406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.281 [2024-11-26 19:03:26.573586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.281 [2024-11-26 19:03:26.573730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.281 spare 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.281 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.539 [2024-11-26 19:03:26.673868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:35.539 [2024-11-26 19:03:26.673989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:35.539 [2024-11-26 19:03:26.674489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:35.539 [2024-11-26 19:03:26.679165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:35.540 [2024-11-26 19:03:26.679193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:35.540 [2024-11-26 19:03:26.679518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.540 "name": "raid_bdev1", 00:16:35.540 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:35.540 "strip_size_kb": 64, 00:16:35.540 "state": "online", 00:16:35.540 "raid_level": "raid5f", 00:16:35.540 "superblock": true, 00:16:35.540 "num_base_bdevs": 3, 00:16:35.540 "num_base_bdevs_discovered": 3, 00:16:35.540 "num_base_bdevs_operational": 3, 00:16:35.540 "base_bdevs_list": [ 00:16:35.540 { 
00:16:35.540 "name": "spare", 00:16:35.540 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:35.540 "is_configured": true, 00:16:35.540 "data_offset": 2048, 00:16:35.540 "data_size": 63488 00:16:35.540 }, 00:16:35.540 { 00:16:35.540 "name": "BaseBdev2", 00:16:35.540 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:35.540 "is_configured": true, 00:16:35.540 "data_offset": 2048, 00:16:35.540 "data_size": 63488 00:16:35.540 }, 00:16:35.540 { 00:16:35.540 "name": "BaseBdev3", 00:16:35.540 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:35.540 "is_configured": true, 00:16:35.540 "data_offset": 2048, 00:16:35.540 "data_size": 63488 00:16:35.540 } 00:16:35.540 ] 00:16:35.540 }' 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.540 19:03:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.107 "name": "raid_bdev1", 00:16:36.107 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:36.107 "strip_size_kb": 64, 00:16:36.107 "state": "online", 00:16:36.107 "raid_level": "raid5f", 00:16:36.107 "superblock": true, 00:16:36.107 "num_base_bdevs": 3, 00:16:36.107 "num_base_bdevs_discovered": 3, 00:16:36.107 "num_base_bdevs_operational": 3, 00:16:36.107 "base_bdevs_list": [ 00:16:36.107 { 00:16:36.107 "name": "spare", 00:16:36.107 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:36.107 "is_configured": true, 00:16:36.107 "data_offset": 2048, 00:16:36.107 "data_size": 63488 00:16:36.107 }, 00:16:36.107 { 00:16:36.107 "name": "BaseBdev2", 00:16:36.107 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:36.107 "is_configured": true, 00:16:36.107 "data_offset": 2048, 00:16:36.107 "data_size": 63488 00:16:36.107 }, 00:16:36.107 { 00:16:36.107 "name": "BaseBdev3", 00:16:36.107 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:36.107 "is_configured": true, 00:16:36.107 "data_offset": 2048, 00:16:36.107 "data_size": 63488 00:16:36.107 } 00:16:36.107 ] 00:16:36.107 }' 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.107 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.107 [2024-11-26 19:03:27.441419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.108 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.366 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.366 "name": "raid_bdev1", 00:16:36.366 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:36.366 "strip_size_kb": 64, 00:16:36.366 "state": "online", 00:16:36.366 "raid_level": "raid5f", 00:16:36.366 "superblock": true, 00:16:36.366 "num_base_bdevs": 3, 00:16:36.366 "num_base_bdevs_discovered": 2, 00:16:36.366 "num_base_bdevs_operational": 2, 00:16:36.366 "base_bdevs_list": [ 00:16:36.366 { 00:16:36.366 "name": null, 00:16:36.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.366 "is_configured": false, 00:16:36.366 "data_offset": 0, 00:16:36.366 "data_size": 63488 00:16:36.366 }, 00:16:36.366 { 00:16:36.366 "name": "BaseBdev2", 00:16:36.366 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:36.366 "is_configured": true, 00:16:36.366 "data_offset": 2048, 00:16:36.366 "data_size": 63488 00:16:36.366 }, 00:16:36.366 { 00:16:36.366 "name": "BaseBdev3", 00:16:36.366 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:36.367 "is_configured": true, 00:16:36.367 "data_offset": 2048, 00:16:36.367 "data_size": 63488 00:16:36.367 } 00:16:36.367 ] 00:16:36.367 }' 00:16:36.367 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.367 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:36.625 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.625 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.625 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.625 [2024-11-26 19:03:27.981691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.625 [2024-11-26 19:03:27.981994] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.625 [2024-11-26 19:03:27.982022] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:36.625 [2024-11-26 19:03:27.982073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.884 [2024-11-26 19:03:27.997251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:36.884 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.884 19:03:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:36.884 [2024-11-26 19:03:28.004613] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.822 "name": "raid_bdev1", 00:16:37.822 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:37.822 "strip_size_kb": 64, 00:16:37.822 "state": "online", 00:16:37.822 "raid_level": "raid5f", 00:16:37.822 "superblock": true, 00:16:37.822 "num_base_bdevs": 3, 00:16:37.822 "num_base_bdevs_discovered": 3, 00:16:37.822 "num_base_bdevs_operational": 3, 00:16:37.822 "process": { 00:16:37.822 "type": "rebuild", 00:16:37.822 "target": "spare", 00:16:37.822 "progress": { 00:16:37.822 "blocks": 18432, 00:16:37.822 "percent": 14 00:16:37.822 } 00:16:37.822 }, 00:16:37.822 "base_bdevs_list": [ 00:16:37.822 { 00:16:37.822 "name": "spare", 00:16:37.822 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:37.822 "is_configured": true, 00:16:37.822 "data_offset": 2048, 00:16:37.822 "data_size": 63488 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "name": "BaseBdev2", 00:16:37.822 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:37.822 "is_configured": true, 00:16:37.822 "data_offset": 2048, 00:16:37.822 "data_size": 63488 00:16:37.822 }, 00:16:37.822 { 00:16:37.822 "name": "BaseBdev3", 00:16:37.822 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:37.822 "is_configured": true, 00:16:37.822 "data_offset": 2048, 00:16:37.822 "data_size": 63488 00:16:37.822 } 00:16:37.822 ] 00:16:37.822 }' 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.822 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 [2024-11-26 19:03:29.166456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.081 [2024-11-26 19:03:29.218390] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.081 [2024-11-26 19:03:29.218478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.081 [2024-11-26 19:03:29.218502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.081 [2024-11-26 19:03:29.218515] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.081 "name": "raid_bdev1", 00:16:38.081 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:38.081 "strip_size_kb": 64, 00:16:38.081 "state": "online", 00:16:38.081 "raid_level": "raid5f", 00:16:38.081 "superblock": true, 00:16:38.081 "num_base_bdevs": 3, 00:16:38.081 "num_base_bdevs_discovered": 2, 00:16:38.081 "num_base_bdevs_operational": 2, 00:16:38.081 "base_bdevs_list": [ 00:16:38.081 { 00:16:38.081 "name": null, 00:16:38.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.081 "is_configured": false, 00:16:38.081 "data_offset": 0, 00:16:38.081 "data_size": 63488 00:16:38.081 }, 00:16:38.081 { 00:16:38.081 "name": "BaseBdev2", 00:16:38.081 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:38.081 "is_configured": true, 00:16:38.081 
"data_offset": 2048, 00:16:38.081 "data_size": 63488 00:16:38.081 }, 00:16:38.081 { 00:16:38.081 "name": "BaseBdev3", 00:16:38.081 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:38.081 "is_configured": true, 00:16:38.081 "data_offset": 2048, 00:16:38.081 "data_size": 63488 00:16:38.081 } 00:16:38.081 ] 00:16:38.081 }' 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.081 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.648 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.648 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.648 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.648 [2024-11-26 19:03:29.785736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.648 [2024-11-26 19:03:29.785833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.648 [2024-11-26 19:03:29.785864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:38.648 [2024-11-26 19:03:29.785900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.648 [2024-11-26 19:03:29.786601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.648 [2024-11-26 19:03:29.786641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.648 [2024-11-26 19:03:29.786780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.648 [2024-11-26 19:03:29.786807] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:38.648 [2024-11-26 19:03:29.786821] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:38.649 [2024-11-26 19:03:29.786853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.649 [2024-11-26 19:03:29.801990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:38.649 spare 00:16:38.649 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.649 19:03:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:38.649 [2024-11-26 19:03:29.809439] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.583 "name": "raid_bdev1", 00:16:39.583 "uuid": "336b7306-c122-4148-b11d-f24180170337", 
00:16:39.583 "strip_size_kb": 64, 00:16:39.583 "state": "online", 00:16:39.583 "raid_level": "raid5f", 00:16:39.583 "superblock": true, 00:16:39.583 "num_base_bdevs": 3, 00:16:39.583 "num_base_bdevs_discovered": 3, 00:16:39.583 "num_base_bdevs_operational": 3, 00:16:39.583 "process": { 00:16:39.583 "type": "rebuild", 00:16:39.583 "target": "spare", 00:16:39.583 "progress": { 00:16:39.583 "blocks": 18432, 00:16:39.583 "percent": 14 00:16:39.583 } 00:16:39.583 }, 00:16:39.583 "base_bdevs_list": [ 00:16:39.583 { 00:16:39.583 "name": "spare", 00:16:39.583 "uuid": "caee3c4b-3a6a-570c-8489-677115fb4b7b", 00:16:39.583 "is_configured": true, 00:16:39.583 "data_offset": 2048, 00:16:39.583 "data_size": 63488 00:16:39.583 }, 00:16:39.583 { 00:16:39.583 "name": "BaseBdev2", 00:16:39.583 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:39.583 "is_configured": true, 00:16:39.583 "data_offset": 2048, 00:16:39.583 "data_size": 63488 00:16:39.583 }, 00:16:39.583 { 00:16:39.583 "name": "BaseBdev3", 00:16:39.583 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:39.583 "is_configured": true, 00:16:39.583 "data_offset": 2048, 00:16:39.583 "data_size": 63488 00:16:39.583 } 00:16:39.583 ] 00:16:39.583 }' 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.583 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.842 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.842 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.842 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.842 19:03:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.842 [2024-11-26 19:03:30.995495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.842 [2024-11-26 19:03:31.023274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:39.842 [2024-11-26 19:03:31.023356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.842 [2024-11-26 19:03:31.023384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.842 [2024-11-26 19:03:31.023395] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.842 
19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.842 "name": "raid_bdev1", 00:16:39.842 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:39.842 "strip_size_kb": 64, 00:16:39.842 "state": "online", 00:16:39.842 "raid_level": "raid5f", 00:16:39.842 "superblock": true, 00:16:39.842 "num_base_bdevs": 3, 00:16:39.842 "num_base_bdevs_discovered": 2, 00:16:39.842 "num_base_bdevs_operational": 2, 00:16:39.842 "base_bdevs_list": [ 00:16:39.842 { 00:16:39.842 "name": null, 00:16:39.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.842 "is_configured": false, 00:16:39.842 "data_offset": 0, 00:16:39.842 "data_size": 63488 00:16:39.842 }, 00:16:39.842 { 00:16:39.842 "name": "BaseBdev2", 00:16:39.842 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:39.842 "is_configured": true, 00:16:39.842 "data_offset": 2048, 00:16:39.842 "data_size": 63488 00:16:39.842 }, 00:16:39.842 { 00:16:39.842 "name": "BaseBdev3", 00:16:39.842 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:39.842 "is_configured": true, 00:16:39.842 "data_offset": 2048, 00:16:39.842 "data_size": 63488 00:16:39.842 } 00:16:39.842 ] 00:16:39.842 }' 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.842 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.423 19:03:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.423 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.424 "name": "raid_bdev1", 00:16:40.424 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:40.424 "strip_size_kb": 64, 00:16:40.424 "state": "online", 00:16:40.424 "raid_level": "raid5f", 00:16:40.424 "superblock": true, 00:16:40.424 "num_base_bdevs": 3, 00:16:40.424 "num_base_bdevs_discovered": 2, 00:16:40.424 "num_base_bdevs_operational": 2, 00:16:40.424 "base_bdevs_list": [ 00:16:40.424 { 00:16:40.424 "name": null, 00:16:40.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.424 "is_configured": false, 00:16:40.424 "data_offset": 0, 00:16:40.424 "data_size": 63488 00:16:40.424 }, 00:16:40.424 { 00:16:40.424 "name": "BaseBdev2", 00:16:40.424 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:40.424 "is_configured": true, 00:16:40.424 "data_offset": 2048, 00:16:40.424 "data_size": 63488 00:16:40.424 }, 00:16:40.424 { 00:16:40.424 "name": "BaseBdev3", 00:16:40.424 "uuid": 
"898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:40.424 "is_configured": true, 00:16:40.424 "data_offset": 2048, 00:16:40.424 "data_size": 63488 00:16:40.424 } 00:16:40.424 ] 00:16:40.424 }' 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.424 [2024-11-26 19:03:31.766765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.424 [2024-11-26 19:03:31.766848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.424 [2024-11-26 19:03:31.766883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:40.424 [2024-11-26 19:03:31.766898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.424 [2024-11-26 19:03:31.767559] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.424 [2024-11-26 19:03:31.767591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.424 [2024-11-26 19:03:31.767717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:40.424 [2024-11-26 19:03:31.767739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.424 [2024-11-26 19:03:31.767780] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.424 [2024-11-26 19:03:31.767795] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:40.424 BaseBdev1 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.424 19:03:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.799 "name": "raid_bdev1", 00:16:41.799 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:41.799 "strip_size_kb": 64, 00:16:41.799 "state": "online", 00:16:41.799 "raid_level": "raid5f", 00:16:41.799 "superblock": true, 00:16:41.799 "num_base_bdevs": 3, 00:16:41.799 "num_base_bdevs_discovered": 2, 00:16:41.799 "num_base_bdevs_operational": 2, 00:16:41.799 "base_bdevs_list": [ 00:16:41.799 { 00:16:41.799 "name": null, 00:16:41.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.799 "is_configured": false, 00:16:41.799 "data_offset": 0, 00:16:41.799 "data_size": 63488 00:16:41.799 }, 00:16:41.799 { 00:16:41.799 "name": "BaseBdev2", 00:16:41.799 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:41.799 "is_configured": true, 00:16:41.799 "data_offset": 2048, 00:16:41.799 "data_size": 63488 00:16:41.799 }, 00:16:41.799 { 00:16:41.799 "name": "BaseBdev3", 00:16:41.799 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:41.799 "is_configured": true, 00:16:41.799 "data_offset": 2048, 00:16:41.799 "data_size": 63488 00:16:41.799 } 00:16:41.799 ] 00:16:41.799 }' 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:41.799 19:03:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.059 "name": "raid_bdev1", 00:16:42.059 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:42.059 "strip_size_kb": 64, 00:16:42.059 "state": "online", 00:16:42.059 "raid_level": "raid5f", 00:16:42.059 "superblock": true, 00:16:42.059 "num_base_bdevs": 3, 00:16:42.059 "num_base_bdevs_discovered": 2, 00:16:42.059 "num_base_bdevs_operational": 2, 00:16:42.059 "base_bdevs_list": [ 00:16:42.059 { 00:16:42.059 "name": null, 00:16:42.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.059 "is_configured": false, 00:16:42.059 "data_offset": 0, 00:16:42.059 "data_size": 63488 00:16:42.059 }, 00:16:42.059 { 00:16:42.059 "name": 
"BaseBdev2", 00:16:42.059 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:42.059 "is_configured": true, 00:16:42.059 "data_offset": 2048, 00:16:42.059 "data_size": 63488 00:16:42.059 }, 00:16:42.059 { 00:16:42.059 "name": "BaseBdev3", 00:16:42.059 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:42.059 "is_configured": true, 00:16:42.059 "data_offset": 2048, 00:16:42.059 "data_size": 63488 00:16:42.059 } 00:16:42.059 ] 00:16:42.059 }' 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.059 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.318 [2024-11-26 19:03:33.447400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.318 [2024-11-26 19:03:33.447630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:42.318 [2024-11-26 19:03:33.447654] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:42.318 request: 00:16:42.318 { 00:16:42.318 "base_bdev": "BaseBdev1", 00:16:42.318 "raid_bdev": "raid_bdev1", 00:16:42.318 "method": "bdev_raid_add_base_bdev", 00:16:42.318 "req_id": 1 00:16:42.318 } 00:16:42.318 Got JSON-RPC error response 00:16:42.318 response: 00:16:42.318 { 00:16:42.318 "code": -22, 00:16:42.318 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:42.318 } 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.318 19:03:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.253 "name": "raid_bdev1", 00:16:43.253 "uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:43.253 "strip_size_kb": 64, 00:16:43.253 "state": "online", 00:16:43.253 "raid_level": "raid5f", 00:16:43.253 "superblock": true, 00:16:43.253 "num_base_bdevs": 3, 00:16:43.253 "num_base_bdevs_discovered": 2, 00:16:43.253 "num_base_bdevs_operational": 2, 00:16:43.253 "base_bdevs_list": [ 00:16:43.253 { 00:16:43.253 "name": null, 00:16:43.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.253 "is_configured": false, 00:16:43.253 "data_offset": 0, 00:16:43.253 
"data_size": 63488 00:16:43.253 }, 00:16:43.253 { 00:16:43.253 "name": "BaseBdev2", 00:16:43.253 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:43.253 "is_configured": true, 00:16:43.253 "data_offset": 2048, 00:16:43.253 "data_size": 63488 00:16:43.253 }, 00:16:43.253 { 00:16:43.253 "name": "BaseBdev3", 00:16:43.253 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:43.253 "is_configured": true, 00:16:43.253 "data_offset": 2048, 00:16:43.253 "data_size": 63488 00:16:43.253 } 00:16:43.253 ] 00:16:43.253 }' 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.253 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.822 19:03:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.822 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.822 "name": "raid_bdev1", 00:16:43.822 
"uuid": "336b7306-c122-4148-b11d-f24180170337", 00:16:43.822 "strip_size_kb": 64, 00:16:43.822 "state": "online", 00:16:43.822 "raid_level": "raid5f", 00:16:43.822 "superblock": true, 00:16:43.822 "num_base_bdevs": 3, 00:16:43.822 "num_base_bdevs_discovered": 2, 00:16:43.822 "num_base_bdevs_operational": 2, 00:16:43.822 "base_bdevs_list": [ 00:16:43.822 { 00:16:43.822 "name": null, 00:16:43.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.822 "is_configured": false, 00:16:43.822 "data_offset": 0, 00:16:43.822 "data_size": 63488 00:16:43.823 }, 00:16:43.823 { 00:16:43.823 "name": "BaseBdev2", 00:16:43.823 "uuid": "1460f304-a00f-5d9f-b02b-aae2cb062ade", 00:16:43.823 "is_configured": true, 00:16:43.823 "data_offset": 2048, 00:16:43.823 "data_size": 63488 00:16:43.823 }, 00:16:43.823 { 00:16:43.823 "name": "BaseBdev3", 00:16:43.823 "uuid": "898c4f0d-6db7-51be-aa7f-06dd69381140", 00:16:43.823 "is_configured": true, 00:16:43.823 "data_offset": 2048, 00:16:43.823 "data_size": 63488 00:16:43.823 } 00:16:43.823 ] 00:16:43.823 }' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82445 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82445 ']' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82445 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82445 00:16:43.823 killing process with pid 82445 00:16:43.823 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.823 00:16:43.823 Latency(us) 00:16:43.823 [2024-11-26T19:03:35.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.823 [2024-11-26T19:03:35.190Z] =================================================================================================================== 00:16:43.823 [2024-11-26T19:03:35.190Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82445' 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82445 00:16:43.823 [2024-11-26 19:03:35.149412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.823 19:03:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82445 00:16:43.823 [2024-11-26 19:03:35.149556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.823 [2024-11-26 19:03:35.149637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.823 [2024-11-26 19:03:35.149671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:44.391 [2024-11-26 19:03:35.483610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.328 ************************************ 00:16:45.328 END TEST 
raid5f_rebuild_test_sb 00:16:45.328 ************************************ 00:16:45.328 19:03:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:45.328 00:16:45.328 real 0m24.909s 00:16:45.328 user 0m33.352s 00:16:45.328 sys 0m2.620s 00:16:45.328 19:03:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.328 19:03:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.328 19:03:36 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:45.328 19:03:36 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:45.328 19:03:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:45.328 19:03:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.328 19:03:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.328 ************************************ 00:16:45.328 START TEST raid5f_state_function_test 00:16:45.328 ************************************ 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:45.328 
19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 
00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:45.328 Process raid pid: 83212 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83212 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83212' 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83212 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83212 ']' 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.328 19:03:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.328 [2024-11-26 19:03:36.675445] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:16:45.328 [2024-11-26 19:03:36.675637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.587 [2024-11-26 19:03:36.865341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.846 [2024-11-26 19:03:36.986571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.846 [2024-11-26 19:03:37.199296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.846 [2024-11-26 19:03:37.199345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.413 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.413 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:46.413 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.413 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.413 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.413 [2024-11-26 19:03:37.705596] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.413 [2024-11-26 19:03:37.705674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.413 [2024-11-26 19:03:37.705691] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.413 [2024-11-26 19:03:37.705707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.414 [2024-11-26 19:03:37.705717] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:46.414 [2024-11-26 19:03:37.705731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.414 [2024-11-26 19:03:37.705740] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:46.414 [2024-11-26 19:03:37.705754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.414 19:03:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.414 "name": "Existed_Raid", 00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.414 "strip_size_kb": 64, 00:16:46.414 "state": "configuring", 00:16:46.414 "raid_level": "raid5f", 00:16:46.414 "superblock": false, 00:16:46.414 "num_base_bdevs": 4, 00:16:46.414 "num_base_bdevs_discovered": 0, 00:16:46.414 "num_base_bdevs_operational": 4, 00:16:46.414 "base_bdevs_list": [ 00:16:46.414 { 00:16:46.414 "name": "BaseBdev1", 00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.414 "is_configured": false, 00:16:46.414 "data_offset": 0, 00:16:46.414 "data_size": 0 00:16:46.414 }, 00:16:46.414 { 00:16:46.414 "name": "BaseBdev2", 00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.414 "is_configured": false, 00:16:46.414 "data_offset": 0, 00:16:46.414 "data_size": 0 00:16:46.414 }, 00:16:46.414 { 00:16:46.414 "name": "BaseBdev3", 00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.414 "is_configured": false, 00:16:46.414 "data_offset": 0, 00:16:46.414 "data_size": 0 00:16:46.414 }, 00:16:46.414 { 00:16:46.414 "name": "BaseBdev4", 00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.414 "is_configured": false, 00:16:46.414 "data_offset": 0, 00:16:46.414 "data_size": 0 00:16:46.414 } 00:16:46.414 ] 00:16:46.414 }' 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.414 19:03:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.982 19:03:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.982 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.982 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.982 [2024-11-26 19:03:38.197695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.983 [2024-11-26 19:03:38.197757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.983 [2024-11-26 19:03:38.205659] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.983 [2024-11-26 19:03:38.205725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.983 [2024-11-26 19:03:38.205740] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.983 [2024-11-26 19:03:38.205757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.983 [2024-11-26 19:03:38.205767] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.983 [2024-11-26 19:03:38.205781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.983 [2024-11-26 19:03:38.205802] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:46.983 [2024-11-26 19:03:38.205816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.983 [2024-11-26 19:03:38.249215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.983 BaseBdev1 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.983 
19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.983 [ 00:16:46.983 { 00:16:46.983 "name": "BaseBdev1", 00:16:46.983 "aliases": [ 00:16:46.983 "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2" 00:16:46.983 ], 00:16:46.983 "product_name": "Malloc disk", 00:16:46.983 "block_size": 512, 00:16:46.983 "num_blocks": 65536, 00:16:46.983 "uuid": "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:46.983 "assigned_rate_limits": { 00:16:46.983 "rw_ios_per_sec": 0, 00:16:46.983 "rw_mbytes_per_sec": 0, 00:16:46.983 "r_mbytes_per_sec": 0, 00:16:46.983 "w_mbytes_per_sec": 0 00:16:46.983 }, 00:16:46.983 "claimed": true, 00:16:46.983 "claim_type": "exclusive_write", 00:16:46.983 "zoned": false, 00:16:46.983 "supported_io_types": { 00:16:46.983 "read": true, 00:16:46.983 "write": true, 00:16:46.983 "unmap": true, 00:16:46.983 "flush": true, 00:16:46.983 "reset": true, 00:16:46.983 "nvme_admin": false, 00:16:46.983 "nvme_io": false, 00:16:46.983 "nvme_io_md": false, 00:16:46.983 "write_zeroes": true, 00:16:46.983 "zcopy": true, 00:16:46.983 "get_zone_info": false, 00:16:46.983 "zone_management": false, 00:16:46.983 "zone_append": false, 00:16:46.983 "compare": false, 00:16:46.983 "compare_and_write": false, 00:16:46.983 "abort": true, 00:16:46.983 "seek_hole": false, 00:16:46.983 "seek_data": false, 00:16:46.983 "copy": true, 00:16:46.983 "nvme_iov_md": false 00:16:46.983 }, 00:16:46.983 "memory_domains": [ 00:16:46.983 { 00:16:46.983 "dma_device_id": "system", 00:16:46.983 "dma_device_type": 1 00:16:46.983 }, 00:16:46.983 { 00:16:46.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.983 "dma_device_type": 2 00:16:46.983 } 00:16:46.983 ], 00:16:46.983 "driver_specific": {} 00:16:46.983 } 
00:16:46.983 ] 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:46.983 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.983 "name": "Existed_Raid", 00:16:46.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.983 "strip_size_kb": 64, 00:16:46.983 "state": "configuring", 00:16:46.983 "raid_level": "raid5f", 00:16:46.983 "superblock": false, 00:16:46.983 "num_base_bdevs": 4, 00:16:46.983 "num_base_bdevs_discovered": 1, 00:16:46.983 "num_base_bdevs_operational": 4, 00:16:46.983 "base_bdevs_list": [ 00:16:46.983 { 00:16:46.983 "name": "BaseBdev1", 00:16:46.983 "uuid": "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:46.983 "is_configured": true, 00:16:46.984 "data_offset": 0, 00:16:46.984 "data_size": 65536 00:16:46.984 }, 00:16:46.984 { 00:16:46.984 "name": "BaseBdev2", 00:16:46.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.984 "is_configured": false, 00:16:46.984 "data_offset": 0, 00:16:46.984 "data_size": 0 00:16:46.984 }, 00:16:46.984 { 00:16:46.984 "name": "BaseBdev3", 00:16:46.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.984 "is_configured": false, 00:16:46.984 "data_offset": 0, 00:16:46.984 "data_size": 0 00:16:46.984 }, 00:16:46.984 { 00:16:46.984 "name": "BaseBdev4", 00:16:46.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.984 "is_configured": false, 00:16:46.984 "data_offset": 0, 00:16:46.984 "data_size": 0 00:16:46.984 } 00:16:46.984 ] 00:16:46.984 }' 00:16:46.984 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.984 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 
[2024-11-26 19:03:38.797443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.554 [2024-11-26 19:03:38.797513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 [2024-11-26 19:03:38.805473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.554 [2024-11-26 19:03:38.808233] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.554 [2024-11-26 19:03:38.808341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.554 [2024-11-26 19:03:38.808379] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.554 [2024-11-26 19:03:38.808414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.554 [2024-11-26 19:03:38.808436] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.554 [2024-11-26 19:03:38.808463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.554 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.554 "name": "Existed_Raid", 00:16:47.554 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:47.554 "strip_size_kb": 64, 00:16:47.554 "state": "configuring", 00:16:47.554 "raid_level": "raid5f", 00:16:47.554 "superblock": false, 00:16:47.554 "num_base_bdevs": 4, 00:16:47.554 "num_base_bdevs_discovered": 1, 00:16:47.554 "num_base_bdevs_operational": 4, 00:16:47.554 "base_bdevs_list": [ 00:16:47.554 { 00:16:47.554 "name": "BaseBdev1", 00:16:47.554 "uuid": "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:47.554 "is_configured": true, 00:16:47.554 "data_offset": 0, 00:16:47.554 "data_size": 65536 00:16:47.554 }, 00:16:47.554 { 00:16:47.554 "name": "BaseBdev2", 00:16:47.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.554 "is_configured": false, 00:16:47.554 "data_offset": 0, 00:16:47.554 "data_size": 0 00:16:47.554 }, 00:16:47.554 { 00:16:47.555 "name": "BaseBdev3", 00:16:47.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.555 "is_configured": false, 00:16:47.555 "data_offset": 0, 00:16:47.555 "data_size": 0 00:16:47.555 }, 00:16:47.555 { 00:16:47.555 "name": "BaseBdev4", 00:16:47.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.555 "is_configured": false, 00:16:47.555 "data_offset": 0, 00:16:47.555 "data_size": 0 00:16:47.555 } 00:16:47.555 ] 00:16:47.555 }' 00:16:47.555 19:03:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.555 19:03:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.123 [2024-11-26 19:03:39.361942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.123 BaseBdev2 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.123 [ 00:16:48.123 { 00:16:48.123 "name": "BaseBdev2", 00:16:48.123 "aliases": [ 00:16:48.123 "4d92f282-8b60-4a96-aeec-d0034f3ca418" 00:16:48.123 ], 00:16:48.123 "product_name": "Malloc disk", 00:16:48.123 "block_size": 512, 00:16:48.123 "num_blocks": 65536, 00:16:48.123 "uuid": "4d92f282-8b60-4a96-aeec-d0034f3ca418", 00:16:48.123 "assigned_rate_limits": { 00:16:48.123 "rw_ios_per_sec": 0, 00:16:48.123 "rw_mbytes_per_sec": 0, 00:16:48.123 
"r_mbytes_per_sec": 0, 00:16:48.123 "w_mbytes_per_sec": 0 00:16:48.123 }, 00:16:48.123 "claimed": true, 00:16:48.123 "claim_type": "exclusive_write", 00:16:48.123 "zoned": false, 00:16:48.123 "supported_io_types": { 00:16:48.123 "read": true, 00:16:48.123 "write": true, 00:16:48.123 "unmap": true, 00:16:48.123 "flush": true, 00:16:48.123 "reset": true, 00:16:48.123 "nvme_admin": false, 00:16:48.123 "nvme_io": false, 00:16:48.123 "nvme_io_md": false, 00:16:48.123 "write_zeroes": true, 00:16:48.123 "zcopy": true, 00:16:48.123 "get_zone_info": false, 00:16:48.123 "zone_management": false, 00:16:48.123 "zone_append": false, 00:16:48.123 "compare": false, 00:16:48.123 "compare_and_write": false, 00:16:48.123 "abort": true, 00:16:48.123 "seek_hole": false, 00:16:48.123 "seek_data": false, 00:16:48.123 "copy": true, 00:16:48.123 "nvme_iov_md": false 00:16:48.123 }, 00:16:48.123 "memory_domains": [ 00:16:48.123 { 00:16:48.123 "dma_device_id": "system", 00:16:48.123 "dma_device_type": 1 00:16:48.123 }, 00:16:48.123 { 00:16:48.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.123 "dma_device_type": 2 00:16:48.123 } 00:16:48.123 ], 00:16:48.123 "driver_specific": {} 00:16:48.123 } 00:16:48.123 ] 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.123 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.124 "name": "Existed_Raid", 00:16:48.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.124 "strip_size_kb": 64, 00:16:48.124 "state": "configuring", 00:16:48.124 "raid_level": "raid5f", 00:16:48.124 "superblock": false, 00:16:48.124 "num_base_bdevs": 4, 00:16:48.124 "num_base_bdevs_discovered": 2, 00:16:48.124 "num_base_bdevs_operational": 4, 00:16:48.124 "base_bdevs_list": [ 00:16:48.124 { 00:16:48.124 "name": "BaseBdev1", 00:16:48.124 "uuid": 
"c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:48.124 "is_configured": true, 00:16:48.124 "data_offset": 0, 00:16:48.124 "data_size": 65536 00:16:48.124 }, 00:16:48.124 { 00:16:48.124 "name": "BaseBdev2", 00:16:48.124 "uuid": "4d92f282-8b60-4a96-aeec-d0034f3ca418", 00:16:48.124 "is_configured": true, 00:16:48.124 "data_offset": 0, 00:16:48.124 "data_size": 65536 00:16:48.124 }, 00:16:48.124 { 00:16:48.124 "name": "BaseBdev3", 00:16:48.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.124 "is_configured": false, 00:16:48.124 "data_offset": 0, 00:16:48.124 "data_size": 0 00:16:48.124 }, 00:16:48.124 { 00:16:48.124 "name": "BaseBdev4", 00:16:48.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.124 "is_configured": false, 00:16:48.124 "data_offset": 0, 00:16:48.124 "data_size": 0 00:16:48.124 } 00:16:48.124 ] 00:16:48.124 }' 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.124 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.692 [2024-11-26 19:03:39.980601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.692 BaseBdev3 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.692 19:03:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.692 [ 00:16:48.692 { 00:16:48.692 "name": "BaseBdev3", 00:16:48.692 "aliases": [ 00:16:48.692 "2b66c37e-8262-4bfd-b548-6bac82c67749" 00:16:48.692 ], 00:16:48.693 "product_name": "Malloc disk", 00:16:48.693 "block_size": 512, 00:16:48.693 "num_blocks": 65536, 00:16:48.693 "uuid": "2b66c37e-8262-4bfd-b548-6bac82c67749", 00:16:48.693 "assigned_rate_limits": { 00:16:48.693 "rw_ios_per_sec": 0, 00:16:48.693 "rw_mbytes_per_sec": 0, 00:16:48.693 "r_mbytes_per_sec": 0, 00:16:48.693 "w_mbytes_per_sec": 0 00:16:48.693 }, 00:16:48.693 "claimed": true, 00:16:48.693 "claim_type": "exclusive_write", 00:16:48.693 "zoned": false, 00:16:48.693 "supported_io_types": { 00:16:48.693 "read": true, 00:16:48.693 "write": true, 00:16:48.693 "unmap": true, 00:16:48.693 "flush": true, 00:16:48.693 "reset": true, 00:16:48.693 "nvme_admin": false, 
00:16:48.693 "nvme_io": false, 00:16:48.693 "nvme_io_md": false, 00:16:48.693 "write_zeroes": true, 00:16:48.693 "zcopy": true, 00:16:48.693 "get_zone_info": false, 00:16:48.693 "zone_management": false, 00:16:48.693 "zone_append": false, 00:16:48.693 "compare": false, 00:16:48.693 "compare_and_write": false, 00:16:48.693 "abort": true, 00:16:48.693 "seek_hole": false, 00:16:48.693 "seek_data": false, 00:16:48.693 "copy": true, 00:16:48.693 "nvme_iov_md": false 00:16:48.693 }, 00:16:48.693 "memory_domains": [ 00:16:48.693 { 00:16:48.693 "dma_device_id": "system", 00:16:48.693 "dma_device_type": 1 00:16:48.693 }, 00:16:48.693 { 00:16:48.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.693 "dma_device_type": 2 00:16:48.693 } 00:16:48.693 ], 00:16:48.693 "driver_specific": {} 00:16:48.693 } 00:16:48.693 ] 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.693 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.952 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.952 "name": "Existed_Raid", 00:16:48.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.952 "strip_size_kb": 64, 00:16:48.952 "state": "configuring", 00:16:48.952 "raid_level": "raid5f", 00:16:48.952 "superblock": false, 00:16:48.952 "num_base_bdevs": 4, 00:16:48.952 "num_base_bdevs_discovered": 3, 00:16:48.952 "num_base_bdevs_operational": 4, 00:16:48.952 "base_bdevs_list": [ 00:16:48.952 { 00:16:48.952 "name": "BaseBdev1", 00:16:48.952 "uuid": "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:48.952 "is_configured": true, 00:16:48.952 "data_offset": 0, 00:16:48.952 "data_size": 65536 00:16:48.952 }, 00:16:48.952 { 00:16:48.952 "name": "BaseBdev2", 00:16:48.952 "uuid": "4d92f282-8b60-4a96-aeec-d0034f3ca418", 00:16:48.952 "is_configured": true, 00:16:48.952 "data_offset": 0, 00:16:48.952 "data_size": 65536 00:16:48.952 }, 00:16:48.952 { 
00:16:48.952 "name": "BaseBdev3", 00:16:48.952 "uuid": "2b66c37e-8262-4bfd-b548-6bac82c67749", 00:16:48.952 "is_configured": true, 00:16:48.952 "data_offset": 0, 00:16:48.952 "data_size": 65536 00:16:48.952 }, 00:16:48.952 { 00:16:48.952 "name": "BaseBdev4", 00:16:48.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.952 "is_configured": false, 00:16:48.952 "data_offset": 0, 00:16:48.952 "data_size": 0 00:16:48.952 } 00:16:48.952 ] 00:16:48.952 }' 00:16:48.952 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.952 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.212 [2024-11-26 19:03:40.556977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:49.212 [2024-11-26 19:03:40.557049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:49.212 [2024-11-26 19:03:40.557064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:49.212 [2024-11-26 19:03:40.557417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.212 [2024-11-26 19:03:40.563697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:49.212 [2024-11-26 19:03:40.563725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:49.212 [2024-11-26 19:03:40.564124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.212 BaseBdev4 00:16:49.212 19:03:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.212 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.471 [ 00:16:49.471 { 00:16:49.471 "name": "BaseBdev4", 00:16:49.471 "aliases": [ 00:16:49.471 "8a7cf36c-f1b8-492a-bdb4-a14e100ec018" 00:16:49.471 ], 00:16:49.471 "product_name": "Malloc disk", 00:16:49.471 "block_size": 512, 00:16:49.471 "num_blocks": 65536, 00:16:49.471 "uuid": "8a7cf36c-f1b8-492a-bdb4-a14e100ec018", 00:16:49.471 "assigned_rate_limits": { 00:16:49.471 "rw_ios_per_sec": 0, 00:16:49.471 
"rw_mbytes_per_sec": 0, 00:16:49.471 "r_mbytes_per_sec": 0, 00:16:49.471 "w_mbytes_per_sec": 0 00:16:49.471 }, 00:16:49.471 "claimed": true, 00:16:49.471 "claim_type": "exclusive_write", 00:16:49.471 "zoned": false, 00:16:49.471 "supported_io_types": { 00:16:49.471 "read": true, 00:16:49.471 "write": true, 00:16:49.471 "unmap": true, 00:16:49.471 "flush": true, 00:16:49.471 "reset": true, 00:16:49.471 "nvme_admin": false, 00:16:49.471 "nvme_io": false, 00:16:49.471 "nvme_io_md": false, 00:16:49.471 "write_zeroes": true, 00:16:49.471 "zcopy": true, 00:16:49.471 "get_zone_info": false, 00:16:49.471 "zone_management": false, 00:16:49.471 "zone_append": false, 00:16:49.471 "compare": false, 00:16:49.471 "compare_and_write": false, 00:16:49.471 "abort": true, 00:16:49.471 "seek_hole": false, 00:16:49.471 "seek_data": false, 00:16:49.471 "copy": true, 00:16:49.471 "nvme_iov_md": false 00:16:49.471 }, 00:16:49.471 "memory_domains": [ 00:16:49.471 { 00:16:49.471 "dma_device_id": "system", 00:16:49.471 "dma_device_type": 1 00:16:49.471 }, 00:16:49.471 { 00:16:49.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.471 "dma_device_type": 2 00:16:49.471 } 00:16:49.471 ], 00:16:49.471 "driver_specific": {} 00:16:49.471 } 00:16:49.471 ] 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.471 19:03:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.471 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.471 "name": "Existed_Raid", 00:16:49.471 "uuid": "88db7563-1133-4f7f-a566-0da17b307917", 00:16:49.471 "strip_size_kb": 64, 00:16:49.471 "state": "online", 00:16:49.471 "raid_level": "raid5f", 00:16:49.471 "superblock": false, 00:16:49.472 "num_base_bdevs": 4, 00:16:49.472 "num_base_bdevs_discovered": 4, 00:16:49.472 "num_base_bdevs_operational": 4, 00:16:49.472 "base_bdevs_list": [ 00:16:49.472 { 00:16:49.472 "name": 
"BaseBdev1", 00:16:49.472 "uuid": "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:49.472 "is_configured": true, 00:16:49.472 "data_offset": 0, 00:16:49.472 "data_size": 65536 00:16:49.472 }, 00:16:49.472 { 00:16:49.472 "name": "BaseBdev2", 00:16:49.472 "uuid": "4d92f282-8b60-4a96-aeec-d0034f3ca418", 00:16:49.472 "is_configured": true, 00:16:49.472 "data_offset": 0, 00:16:49.472 "data_size": 65536 00:16:49.472 }, 00:16:49.472 { 00:16:49.472 "name": "BaseBdev3", 00:16:49.472 "uuid": "2b66c37e-8262-4bfd-b548-6bac82c67749", 00:16:49.472 "is_configured": true, 00:16:49.472 "data_offset": 0, 00:16:49.472 "data_size": 65536 00:16:49.472 }, 00:16:49.472 { 00:16:49.472 "name": "BaseBdev4", 00:16:49.472 "uuid": "8a7cf36c-f1b8-492a-bdb4-a14e100ec018", 00:16:49.472 "is_configured": true, 00:16:49.472 "data_offset": 0, 00:16:49.472 "data_size": 65536 00:16:49.472 } 00:16:49.472 ] 00:16:49.472 }' 00:16:49.472 19:03:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.472 19:03:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.093 [2024-11-26 19:03:41.111687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.093 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.093 "name": "Existed_Raid", 00:16:50.093 "aliases": [ 00:16:50.093 "88db7563-1133-4f7f-a566-0da17b307917" 00:16:50.093 ], 00:16:50.093 "product_name": "Raid Volume", 00:16:50.093 "block_size": 512, 00:16:50.093 "num_blocks": 196608, 00:16:50.093 "uuid": "88db7563-1133-4f7f-a566-0da17b307917", 00:16:50.093 "assigned_rate_limits": { 00:16:50.093 "rw_ios_per_sec": 0, 00:16:50.093 "rw_mbytes_per_sec": 0, 00:16:50.093 "r_mbytes_per_sec": 0, 00:16:50.093 "w_mbytes_per_sec": 0 00:16:50.093 }, 00:16:50.093 "claimed": false, 00:16:50.093 "zoned": false, 00:16:50.093 "supported_io_types": { 00:16:50.093 "read": true, 00:16:50.093 "write": true, 00:16:50.093 "unmap": false, 00:16:50.093 "flush": false, 00:16:50.093 "reset": true, 00:16:50.093 "nvme_admin": false, 00:16:50.093 "nvme_io": false, 00:16:50.093 "nvme_io_md": false, 00:16:50.093 "write_zeroes": true, 00:16:50.093 "zcopy": false, 00:16:50.093 "get_zone_info": false, 00:16:50.093 "zone_management": false, 00:16:50.093 "zone_append": false, 00:16:50.093 "compare": false, 00:16:50.093 "compare_and_write": false, 00:16:50.093 "abort": false, 00:16:50.093 "seek_hole": false, 00:16:50.093 "seek_data": false, 00:16:50.093 "copy": false, 00:16:50.093 "nvme_iov_md": false 00:16:50.093 }, 00:16:50.093 "driver_specific": { 00:16:50.093 "raid": { 00:16:50.093 "uuid": "88db7563-1133-4f7f-a566-0da17b307917", 00:16:50.093 "strip_size_kb": 64, 
00:16:50.093 "state": "online", 00:16:50.093 "raid_level": "raid5f", 00:16:50.094 "superblock": false, 00:16:50.094 "num_base_bdevs": 4, 00:16:50.094 "num_base_bdevs_discovered": 4, 00:16:50.094 "num_base_bdevs_operational": 4, 00:16:50.094 "base_bdevs_list": [ 00:16:50.094 { 00:16:50.094 "name": "BaseBdev1", 00:16:50.094 "uuid": "c3d1eefd-1a2e-4161-a1b6-2b54e1b9d0b2", 00:16:50.094 "is_configured": true, 00:16:50.094 "data_offset": 0, 00:16:50.094 "data_size": 65536 00:16:50.094 }, 00:16:50.094 { 00:16:50.094 "name": "BaseBdev2", 00:16:50.094 "uuid": "4d92f282-8b60-4a96-aeec-d0034f3ca418", 00:16:50.094 "is_configured": true, 00:16:50.094 "data_offset": 0, 00:16:50.094 "data_size": 65536 00:16:50.094 }, 00:16:50.094 { 00:16:50.094 "name": "BaseBdev3", 00:16:50.094 "uuid": "2b66c37e-8262-4bfd-b548-6bac82c67749", 00:16:50.094 "is_configured": true, 00:16:50.094 "data_offset": 0, 00:16:50.094 "data_size": 65536 00:16:50.094 }, 00:16:50.094 { 00:16:50.094 "name": "BaseBdev4", 00:16:50.094 "uuid": "8a7cf36c-f1b8-492a-bdb4-a14e100ec018", 00:16:50.094 "is_configured": true, 00:16:50.094 "data_offset": 0, 00:16:50.094 "data_size": 65536 00:16:50.094 } 00:16:50.094 ] 00:16:50.094 } 00:16:50.094 } 00:16:50.094 }' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:50.094 BaseBdev2 00:16:50.094 BaseBdev3 00:16:50.094 BaseBdev4' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.094 19:03:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.094 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:50.353 [2024-11-26 19:03:41.483572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.353 19:03:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.353 "name": "Existed_Raid", 00:16:50.353 "uuid": "88db7563-1133-4f7f-a566-0da17b307917", 00:16:50.353 "strip_size_kb": 64, 00:16:50.353 "state": "online", 00:16:50.353 "raid_level": "raid5f", 00:16:50.353 "superblock": false, 00:16:50.353 "num_base_bdevs": 4, 00:16:50.353 "num_base_bdevs_discovered": 3, 00:16:50.353 "num_base_bdevs_operational": 3, 00:16:50.353 "base_bdevs_list": [ 00:16:50.353 { 00:16:50.353 "name": null, 00:16:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.353 "is_configured": false, 00:16:50.353 "data_offset": 0, 00:16:50.353 "data_size": 65536 00:16:50.353 }, 00:16:50.353 { 00:16:50.353 "name": "BaseBdev2", 00:16:50.353 "uuid": "4d92f282-8b60-4a96-aeec-d0034f3ca418", 00:16:50.353 "is_configured": true, 00:16:50.353 "data_offset": 0, 00:16:50.353 "data_size": 65536 00:16:50.353 }, 00:16:50.353 { 00:16:50.353 "name": "BaseBdev3", 00:16:50.353 "uuid": "2b66c37e-8262-4bfd-b548-6bac82c67749", 00:16:50.353 "is_configured": true, 00:16:50.353 "data_offset": 0, 00:16:50.353 "data_size": 65536 00:16:50.353 }, 00:16:50.353 { 00:16:50.353 "name": "BaseBdev4", 00:16:50.353 "uuid": "8a7cf36c-f1b8-492a-bdb4-a14e100ec018", 00:16:50.353 "is_configured": true, 00:16:50.353 "data_offset": 0, 00:16:50.353 "data_size": 65536 00:16:50.353 } 00:16:50.353 ] 00:16:50.353 }' 00:16:50.353 
19:03:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.353 19:03:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.920 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:50.920 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:50.920 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.920 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.921 [2024-11-26 19:03:42.125734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.921 [2024-11-26 19:03:42.125852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.921 [2024-11-26 19:03:42.213315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.921 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.921 [2024-11-26 19:03:42.277433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.181 [2024-11-26 19:03:42.425598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:51.181 [2024-11-26 19:03:42.425825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:51.181 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.441 BaseBdev2 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.441 [ 00:16:51.441 { 00:16:51.441 "name": "BaseBdev2", 00:16:51.441 "aliases": [ 00:16:51.441 "78cb9928-9fcd-420a-9629-42750aa70431" 00:16:51.441 ], 00:16:51.441 "product_name": "Malloc disk", 00:16:51.441 "block_size": 512, 00:16:51.441 "num_blocks": 65536, 00:16:51.441 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:51.441 "assigned_rate_limits": { 00:16:51.441 "rw_ios_per_sec": 0, 00:16:51.441 "rw_mbytes_per_sec": 0, 00:16:51.441 "r_mbytes_per_sec": 0, 00:16:51.441 "w_mbytes_per_sec": 0 00:16:51.441 }, 00:16:51.441 "claimed": false, 00:16:51.441 "zoned": false, 00:16:51.441 "supported_io_types": { 00:16:51.441 "read": true, 00:16:51.441 "write": true, 00:16:51.441 "unmap": true, 00:16:51.441 "flush": true, 00:16:51.441 "reset": true, 00:16:51.441 "nvme_admin": false, 00:16:51.441 "nvme_io": false, 00:16:51.441 "nvme_io_md": false, 00:16:51.441 "write_zeroes": true, 00:16:51.441 "zcopy": true, 00:16:51.441 "get_zone_info": false, 00:16:51.441 "zone_management": false, 00:16:51.441 "zone_append": false, 00:16:51.441 "compare": false, 00:16:51.441 "compare_and_write": false, 00:16:51.441 "abort": true, 00:16:51.441 "seek_hole": false, 00:16:51.441 "seek_data": false, 00:16:51.441 "copy": true, 00:16:51.441 "nvme_iov_md": false 00:16:51.441 }, 00:16:51.441 "memory_domains": [ 00:16:51.441 { 00:16:51.441 "dma_device_id": "system", 00:16:51.441 
"dma_device_type": 1 00:16:51.441 }, 00:16:51.441 { 00:16:51.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.441 "dma_device_type": 2 00:16:51.441 } 00:16:51.441 ], 00:16:51.441 "driver_specific": {} 00:16:51.441 } 00:16:51.441 ] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.441 BaseBdev3 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.441 19:03:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.441 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.441 [ 00:16:51.441 { 00:16:51.441 "name": "BaseBdev3", 00:16:51.441 "aliases": [ 00:16:51.441 "294bc6c4-811e-4bc9-a92f-e716578fb5c8" 00:16:51.441 ], 00:16:51.441 "product_name": "Malloc disk", 00:16:51.441 "block_size": 512, 00:16:51.441 "num_blocks": 65536, 00:16:51.441 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:51.441 "assigned_rate_limits": { 00:16:51.441 "rw_ios_per_sec": 0, 00:16:51.441 "rw_mbytes_per_sec": 0, 00:16:51.441 "r_mbytes_per_sec": 0, 00:16:51.441 "w_mbytes_per_sec": 0 00:16:51.441 }, 00:16:51.441 "claimed": false, 00:16:51.441 "zoned": false, 00:16:51.441 "supported_io_types": { 00:16:51.441 "read": true, 00:16:51.441 "write": true, 00:16:51.441 "unmap": true, 00:16:51.441 "flush": true, 00:16:51.441 "reset": true, 00:16:51.441 "nvme_admin": false, 00:16:51.441 "nvme_io": false, 00:16:51.441 "nvme_io_md": false, 00:16:51.442 "write_zeroes": true, 00:16:51.442 "zcopy": true, 00:16:51.442 "get_zone_info": false, 00:16:51.442 "zone_management": false, 00:16:51.442 "zone_append": false, 00:16:51.442 "compare": false, 00:16:51.442 "compare_and_write": false, 00:16:51.442 "abort": true, 00:16:51.442 "seek_hole": false, 00:16:51.442 "seek_data": false, 00:16:51.442 "copy": true, 00:16:51.442 "nvme_iov_md": false 00:16:51.442 }, 00:16:51.442 "memory_domains": [ 00:16:51.442 { 00:16:51.442 
"dma_device_id": "system", 00:16:51.442 "dma_device_type": 1 00:16:51.442 }, 00:16:51.442 { 00:16:51.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.442 "dma_device_type": 2 00:16:51.442 } 00:16:51.442 ], 00:16:51.442 "driver_specific": {} 00:16:51.442 } 00:16:51.442 ] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.442 BaseBdev4 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.442 [ 00:16:51.442 { 00:16:51.442 "name": "BaseBdev4", 00:16:51.442 "aliases": [ 00:16:51.442 "95de3f54-d4a0-4758-809f-e8b08b41d237" 00:16:51.442 ], 00:16:51.442 "product_name": "Malloc disk", 00:16:51.442 "block_size": 512, 00:16:51.442 "num_blocks": 65536, 00:16:51.442 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:51.442 "assigned_rate_limits": { 00:16:51.442 "rw_ios_per_sec": 0, 00:16:51.442 "rw_mbytes_per_sec": 0, 00:16:51.442 "r_mbytes_per_sec": 0, 00:16:51.442 "w_mbytes_per_sec": 0 00:16:51.442 }, 00:16:51.442 "claimed": false, 00:16:51.442 "zoned": false, 00:16:51.442 "supported_io_types": { 00:16:51.442 "read": true, 00:16:51.442 "write": true, 00:16:51.442 "unmap": true, 00:16:51.442 "flush": true, 00:16:51.442 "reset": true, 00:16:51.442 "nvme_admin": false, 00:16:51.442 "nvme_io": false, 00:16:51.442 "nvme_io_md": false, 00:16:51.442 "write_zeroes": true, 00:16:51.442 "zcopy": true, 00:16:51.442 "get_zone_info": false, 00:16:51.442 "zone_management": false, 00:16:51.442 "zone_append": false, 00:16:51.442 "compare": false, 00:16:51.442 "compare_and_write": false, 00:16:51.442 "abort": true, 00:16:51.442 "seek_hole": false, 00:16:51.442 "seek_data": false, 00:16:51.442 "copy": true, 00:16:51.442 "nvme_iov_md": false 00:16:51.442 }, 00:16:51.442 "memory_domains": [ 
00:16:51.442 { 00:16:51.442 "dma_device_id": "system", 00:16:51.442 "dma_device_type": 1 00:16:51.442 }, 00:16:51.442 { 00:16:51.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.442 "dma_device_type": 2 00:16:51.442 } 00:16:51.442 ], 00:16:51.442 "driver_specific": {} 00:16:51.442 } 00:16:51.442 ] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.442 [2024-11-26 19:03:42.789475] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.442 [2024-11-26 19:03:42.789527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.442 [2024-11-26 19:03:42.789572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.442 [2024-11-26 19:03:42.792156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.442 [2024-11-26 19:03:42.792489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.442 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.703 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.703 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.703 "name": "Existed_Raid", 00:16:51.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.703 "strip_size_kb": 64, 00:16:51.703 "state": "configuring", 00:16:51.703 "raid_level": "raid5f", 00:16:51.703 
"superblock": false, 00:16:51.703 "num_base_bdevs": 4, 00:16:51.703 "num_base_bdevs_discovered": 3, 00:16:51.703 "num_base_bdevs_operational": 4, 00:16:51.703 "base_bdevs_list": [ 00:16:51.703 { 00:16:51.703 "name": "BaseBdev1", 00:16:51.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.703 "is_configured": false, 00:16:51.703 "data_offset": 0, 00:16:51.703 "data_size": 0 00:16:51.703 }, 00:16:51.703 { 00:16:51.703 "name": "BaseBdev2", 00:16:51.703 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:51.703 "is_configured": true, 00:16:51.703 "data_offset": 0, 00:16:51.703 "data_size": 65536 00:16:51.703 }, 00:16:51.703 { 00:16:51.703 "name": "BaseBdev3", 00:16:51.703 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:51.703 "is_configured": true, 00:16:51.703 "data_offset": 0, 00:16:51.703 "data_size": 65536 00:16:51.703 }, 00:16:51.703 { 00:16:51.703 "name": "BaseBdev4", 00:16:51.703 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:51.703 "is_configured": true, 00:16:51.703 "data_offset": 0, 00:16:51.703 "data_size": 65536 00:16:51.703 } 00:16:51.703 ] 00:16:51.703 }' 00:16:51.703 19:03:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.703 19:03:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.962 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:51.962 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.962 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.962 [2024-11-26 19:03:43.325683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.221 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.222 "name": "Existed_Raid", 00:16:52.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.222 "strip_size_kb": 64, 00:16:52.222 "state": "configuring", 00:16:52.222 "raid_level": "raid5f", 00:16:52.222 "superblock": false, 
00:16:52.222 "num_base_bdevs": 4, 00:16:52.222 "num_base_bdevs_discovered": 2, 00:16:52.222 "num_base_bdevs_operational": 4, 00:16:52.222 "base_bdevs_list": [ 00:16:52.222 { 00:16:52.222 "name": "BaseBdev1", 00:16:52.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.222 "is_configured": false, 00:16:52.222 "data_offset": 0, 00:16:52.222 "data_size": 0 00:16:52.222 }, 00:16:52.222 { 00:16:52.222 "name": null, 00:16:52.222 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:52.222 "is_configured": false, 00:16:52.222 "data_offset": 0, 00:16:52.222 "data_size": 65536 00:16:52.222 }, 00:16:52.222 { 00:16:52.222 "name": "BaseBdev3", 00:16:52.222 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:52.222 "is_configured": true, 00:16:52.222 "data_offset": 0, 00:16:52.222 "data_size": 65536 00:16:52.222 }, 00:16:52.222 { 00:16:52.222 "name": "BaseBdev4", 00:16:52.222 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:52.222 "is_configured": true, 00:16:52.222 "data_offset": 0, 00:16:52.222 "data_size": 65536 00:16:52.222 } 00:16:52.222 ] 00:16:52.222 }' 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.222 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.481 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:52.481 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.481 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:52.740 
19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 [2024-11-26 19:03:43.890864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.740 BaseBdev1 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.740 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.740 
19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 [ 00:16:52.740 { 00:16:52.740 "name": "BaseBdev1", 00:16:52.740 "aliases": [ 00:16:52.740 "19a8c254-96ee-4d12-8d78-28702a9efba1" 00:16:52.740 ], 00:16:52.740 "product_name": "Malloc disk", 00:16:52.740 "block_size": 512, 00:16:52.740 "num_blocks": 65536, 00:16:52.740 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:52.740 "assigned_rate_limits": { 00:16:52.740 "rw_ios_per_sec": 0, 00:16:52.740 "rw_mbytes_per_sec": 0, 00:16:52.740 "r_mbytes_per_sec": 0, 00:16:52.740 "w_mbytes_per_sec": 0 00:16:52.740 }, 00:16:52.740 "claimed": true, 00:16:52.740 "claim_type": "exclusive_write", 00:16:52.740 "zoned": false, 00:16:52.740 "supported_io_types": { 00:16:52.740 "read": true, 00:16:52.740 "write": true, 00:16:52.740 "unmap": true, 00:16:52.740 "flush": true, 00:16:52.740 "reset": true, 00:16:52.740 "nvme_admin": false, 00:16:52.740 "nvme_io": false, 00:16:52.740 "nvme_io_md": false, 00:16:52.740 "write_zeroes": true, 00:16:52.740 "zcopy": true, 00:16:52.740 "get_zone_info": false, 00:16:52.740 "zone_management": false, 00:16:52.740 "zone_append": false, 00:16:52.740 "compare": false, 00:16:52.740 "compare_and_write": false, 00:16:52.740 "abort": true, 00:16:52.740 "seek_hole": false, 00:16:52.740 "seek_data": false, 00:16:52.740 "copy": true, 00:16:52.740 "nvme_iov_md": false 00:16:52.740 }, 00:16:52.740 "memory_domains": [ 00:16:52.740 { 00:16:52.740 "dma_device_id": "system", 00:16:52.740 "dma_device_type": 1 00:16:52.740 }, 00:16:52.740 { 00:16:52.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.740 "dma_device_type": 2 00:16:52.740 } 00:16:52.740 ], 00:16:52.740 "driver_specific": {} 00:16:52.740 } 00:16:52.740 ] 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.741 19:03:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.741 "name": "Existed_Raid", 00:16:52.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.741 "strip_size_kb": 64, 00:16:52.741 "state": 
"configuring", 00:16:52.741 "raid_level": "raid5f", 00:16:52.741 "superblock": false, 00:16:52.741 "num_base_bdevs": 4, 00:16:52.741 "num_base_bdevs_discovered": 3, 00:16:52.741 "num_base_bdevs_operational": 4, 00:16:52.741 "base_bdevs_list": [ 00:16:52.741 { 00:16:52.741 "name": "BaseBdev1", 00:16:52.741 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:52.741 "is_configured": true, 00:16:52.741 "data_offset": 0, 00:16:52.741 "data_size": 65536 00:16:52.741 }, 00:16:52.741 { 00:16:52.741 "name": null, 00:16:52.741 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:52.741 "is_configured": false, 00:16:52.741 "data_offset": 0, 00:16:52.741 "data_size": 65536 00:16:52.741 }, 00:16:52.741 { 00:16:52.741 "name": "BaseBdev3", 00:16:52.741 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:52.741 "is_configured": true, 00:16:52.741 "data_offset": 0, 00:16:52.741 "data_size": 65536 00:16:52.741 }, 00:16:52.741 { 00:16:52.741 "name": "BaseBdev4", 00:16:52.741 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:52.741 "is_configured": true, 00:16:52.741 "data_offset": 0, 00:16:52.741 "data_size": 65536 00:16:52.741 } 00:16:52.741 ] 00:16:52.741 }' 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.741 19:03:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.310 19:03:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.310 [2024-11-26 19:03:44.495203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.310 19:03:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.310 "name": "Existed_Raid", 00:16:53.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.310 "strip_size_kb": 64, 00:16:53.310 "state": "configuring", 00:16:53.310 "raid_level": "raid5f", 00:16:53.310 "superblock": false, 00:16:53.310 "num_base_bdevs": 4, 00:16:53.310 "num_base_bdevs_discovered": 2, 00:16:53.310 "num_base_bdevs_operational": 4, 00:16:53.310 "base_bdevs_list": [ 00:16:53.310 { 00:16:53.310 "name": "BaseBdev1", 00:16:53.310 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:53.310 "is_configured": true, 00:16:53.310 "data_offset": 0, 00:16:53.310 "data_size": 65536 00:16:53.310 }, 00:16:53.310 { 00:16:53.310 "name": null, 00:16:53.310 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:53.310 "is_configured": false, 00:16:53.310 "data_offset": 0, 00:16:53.310 "data_size": 65536 00:16:53.310 }, 00:16:53.310 { 00:16:53.310 "name": null, 00:16:53.310 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:53.310 "is_configured": false, 00:16:53.310 "data_offset": 0, 00:16:53.310 "data_size": 65536 00:16:53.310 }, 00:16:53.310 { 00:16:53.310 "name": "BaseBdev4", 00:16:53.310 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:53.310 "is_configured": true, 00:16:53.310 "data_offset": 0, 00:16:53.310 "data_size": 65536 00:16:53.310 } 00:16:53.310 ] 00:16:53.310 }' 00:16:53.310 19:03:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.310 19:03:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.879 [2024-11-26 19:03:45.063377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.879 
19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.879 "name": "Existed_Raid", 00:16:53.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.879 "strip_size_kb": 64, 00:16:53.879 "state": "configuring", 00:16:53.879 "raid_level": "raid5f", 00:16:53.879 "superblock": false, 00:16:53.879 "num_base_bdevs": 4, 00:16:53.879 "num_base_bdevs_discovered": 3, 00:16:53.879 "num_base_bdevs_operational": 4, 00:16:53.879 "base_bdevs_list": [ 00:16:53.879 { 00:16:53.879 "name": "BaseBdev1", 00:16:53.879 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:53.879 "is_configured": true, 00:16:53.879 "data_offset": 0, 00:16:53.879 "data_size": 65536 00:16:53.879 }, 00:16:53.879 { 00:16:53.879 "name": null, 00:16:53.879 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:53.879 "is_configured": 
false, 00:16:53.879 "data_offset": 0, 00:16:53.879 "data_size": 65536 00:16:53.879 }, 00:16:53.879 { 00:16:53.879 "name": "BaseBdev3", 00:16:53.879 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:53.879 "is_configured": true, 00:16:53.879 "data_offset": 0, 00:16:53.879 "data_size": 65536 00:16:53.879 }, 00:16:53.879 { 00:16:53.879 "name": "BaseBdev4", 00:16:53.879 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:53.879 "is_configured": true, 00:16:53.879 "data_offset": 0, 00:16:53.879 "data_size": 65536 00:16:53.879 } 00:16:53.879 ] 00:16:53.879 }' 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.879 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.447 [2024-11-26 19:03:45.655652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.447 "name": "Existed_Raid", 00:16:54.447 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:54.447 "strip_size_kb": 64, 00:16:54.447 "state": "configuring", 00:16:54.447 "raid_level": "raid5f", 00:16:54.447 "superblock": false, 00:16:54.447 "num_base_bdevs": 4, 00:16:54.447 "num_base_bdevs_discovered": 2, 00:16:54.447 "num_base_bdevs_operational": 4, 00:16:54.447 "base_bdevs_list": [ 00:16:54.447 { 00:16:54.447 "name": null, 00:16:54.447 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:54.447 "is_configured": false, 00:16:54.447 "data_offset": 0, 00:16:54.447 "data_size": 65536 00:16:54.447 }, 00:16:54.447 { 00:16:54.447 "name": null, 00:16:54.447 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:54.447 "is_configured": false, 00:16:54.447 "data_offset": 0, 00:16:54.447 "data_size": 65536 00:16:54.447 }, 00:16:54.447 { 00:16:54.447 "name": "BaseBdev3", 00:16:54.447 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:54.447 "is_configured": true, 00:16:54.447 "data_offset": 0, 00:16:54.447 "data_size": 65536 00:16:54.447 }, 00:16:54.447 { 00:16:54.447 "name": "BaseBdev4", 00:16:54.447 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:54.447 "is_configured": true, 00:16:54.447 "data_offset": 0, 00:16:54.447 "data_size": 65536 00:16:54.447 } 00:16:54.447 ] 00:16:54.447 }' 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.447 19:03:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.015 [2024-11-26 19:03:46.306468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.015 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.016 "name": "Existed_Raid", 00:16:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.016 "strip_size_kb": 64, 00:16:55.016 "state": "configuring", 00:16:55.016 "raid_level": "raid5f", 00:16:55.016 "superblock": false, 00:16:55.016 "num_base_bdevs": 4, 00:16:55.016 "num_base_bdevs_discovered": 3, 00:16:55.016 "num_base_bdevs_operational": 4, 00:16:55.016 "base_bdevs_list": [ 00:16:55.016 { 00:16:55.016 "name": null, 00:16:55.016 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:55.016 "is_configured": false, 00:16:55.016 "data_offset": 0, 00:16:55.016 "data_size": 65536 00:16:55.016 }, 00:16:55.016 { 00:16:55.016 "name": "BaseBdev2", 00:16:55.016 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:55.016 "is_configured": true, 00:16:55.016 "data_offset": 0, 00:16:55.016 "data_size": 65536 00:16:55.016 }, 00:16:55.016 { 00:16:55.016 "name": "BaseBdev3", 00:16:55.016 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:55.016 "is_configured": true, 00:16:55.016 "data_offset": 0, 00:16:55.016 "data_size": 65536 00:16:55.016 }, 00:16:55.016 { 00:16:55.016 "name": "BaseBdev4", 00:16:55.016 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:55.016 "is_configured": true, 00:16:55.016 "data_offset": 0, 00:16:55.016 "data_size": 65536 00:16:55.016 } 00:16:55.016 ] 00:16:55.016 }' 00:16:55.016 19:03:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.016 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 19a8c254-96ee-4d12-8d78-28702a9efba1 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.585 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.844 [2024-11-26 19:03:46.977251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:55.844 [2024-11-26 
19:03:46.977328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:55.844 [2024-11-26 19:03:46.977341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:55.844 [2024-11-26 19:03:46.977656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:55.844 [2024-11-26 19:03:46.983864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:55.844 [2024-11-26 19:03:46.983896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:55.844 [2024-11-26 19:03:46.984257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.844 NewBaseBdev 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.844 19:03:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.844 [ 00:16:55.844 { 00:16:55.844 "name": "NewBaseBdev", 00:16:55.844 "aliases": [ 00:16:55.844 "19a8c254-96ee-4d12-8d78-28702a9efba1" 00:16:55.844 ], 00:16:55.844 "product_name": "Malloc disk", 00:16:55.844 "block_size": 512, 00:16:55.844 "num_blocks": 65536, 00:16:55.844 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:55.844 "assigned_rate_limits": { 00:16:55.844 "rw_ios_per_sec": 0, 00:16:55.844 "rw_mbytes_per_sec": 0, 00:16:55.844 "r_mbytes_per_sec": 0, 00:16:55.844 "w_mbytes_per_sec": 0 00:16:55.844 }, 00:16:55.844 "claimed": true, 00:16:55.844 "claim_type": "exclusive_write", 00:16:55.844 "zoned": false, 00:16:55.844 "supported_io_types": { 00:16:55.844 "read": true, 00:16:55.844 "write": true, 00:16:55.844 "unmap": true, 00:16:55.844 "flush": true, 00:16:55.844 "reset": true, 00:16:55.844 "nvme_admin": false, 00:16:55.844 "nvme_io": false, 00:16:55.844 "nvme_io_md": false, 00:16:55.844 "write_zeroes": true, 00:16:55.844 "zcopy": true, 00:16:55.844 "get_zone_info": false, 00:16:55.844 "zone_management": false, 00:16:55.844 "zone_append": false, 00:16:55.844 "compare": false, 00:16:55.844 "compare_and_write": false, 00:16:55.844 "abort": true, 00:16:55.844 "seek_hole": false, 00:16:55.844 "seek_data": false, 00:16:55.844 "copy": true, 00:16:55.844 "nvme_iov_md": false 00:16:55.844 }, 00:16:55.844 "memory_domains": [ 00:16:55.844 { 00:16:55.844 "dma_device_id": "system", 00:16:55.844 "dma_device_type": 1 00:16:55.844 }, 00:16:55.844 { 00:16:55.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.844 "dma_device_type": 2 00:16:55.844 } 
00:16:55.844 ], 00:16:55.844 "driver_specific": {} 00:16:55.844 } 00:16:55.844 ] 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.844 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.845 "name": "Existed_Raid", 00:16:55.845 "uuid": "6bc02a60-38d8-405c-b9b3-5fdc2a2c9ecb", 00:16:55.845 "strip_size_kb": 64, 00:16:55.845 "state": "online", 00:16:55.845 "raid_level": "raid5f", 00:16:55.845 "superblock": false, 00:16:55.845 "num_base_bdevs": 4, 00:16:55.845 "num_base_bdevs_discovered": 4, 00:16:55.845 "num_base_bdevs_operational": 4, 00:16:55.845 "base_bdevs_list": [ 00:16:55.845 { 00:16:55.845 "name": "NewBaseBdev", 00:16:55.845 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:55.845 "is_configured": true, 00:16:55.845 "data_offset": 0, 00:16:55.845 "data_size": 65536 00:16:55.845 }, 00:16:55.845 { 00:16:55.845 "name": "BaseBdev2", 00:16:55.845 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:55.845 "is_configured": true, 00:16:55.845 "data_offset": 0, 00:16:55.845 "data_size": 65536 00:16:55.845 }, 00:16:55.845 { 00:16:55.845 "name": "BaseBdev3", 00:16:55.845 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:55.845 "is_configured": true, 00:16:55.845 "data_offset": 0, 00:16:55.845 "data_size": 65536 00:16:55.845 }, 00:16:55.845 { 00:16:55.845 "name": "BaseBdev4", 00:16:55.845 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:55.845 "is_configured": true, 00:16:55.845 "data_offset": 0, 00:16:55.845 "data_size": 65536 00:16:55.845 } 00:16:55.845 ] 00:16:55.845 }' 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.845 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.423 [2024-11-26 19:03:47.556099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.423 "name": "Existed_Raid", 00:16:56.423 "aliases": [ 00:16:56.423 "6bc02a60-38d8-405c-b9b3-5fdc2a2c9ecb" 00:16:56.423 ], 00:16:56.423 "product_name": "Raid Volume", 00:16:56.423 "block_size": 512, 00:16:56.423 "num_blocks": 196608, 00:16:56.423 "uuid": "6bc02a60-38d8-405c-b9b3-5fdc2a2c9ecb", 00:16:56.423 "assigned_rate_limits": { 00:16:56.423 "rw_ios_per_sec": 0, 00:16:56.423 "rw_mbytes_per_sec": 0, 00:16:56.423 "r_mbytes_per_sec": 0, 00:16:56.423 "w_mbytes_per_sec": 0 00:16:56.423 }, 00:16:56.423 "claimed": false, 00:16:56.423 "zoned": false, 00:16:56.423 "supported_io_types": { 00:16:56.423 "read": true, 00:16:56.423 "write": true, 00:16:56.423 "unmap": false, 00:16:56.423 "flush": false, 00:16:56.423 "reset": true, 00:16:56.423 "nvme_admin": false, 00:16:56.423 "nvme_io": false, 00:16:56.423 "nvme_io_md": 
false, 00:16:56.423 "write_zeroes": true, 00:16:56.423 "zcopy": false, 00:16:56.423 "get_zone_info": false, 00:16:56.423 "zone_management": false, 00:16:56.423 "zone_append": false, 00:16:56.423 "compare": false, 00:16:56.423 "compare_and_write": false, 00:16:56.423 "abort": false, 00:16:56.423 "seek_hole": false, 00:16:56.423 "seek_data": false, 00:16:56.423 "copy": false, 00:16:56.423 "nvme_iov_md": false 00:16:56.423 }, 00:16:56.423 "driver_specific": { 00:16:56.423 "raid": { 00:16:56.423 "uuid": "6bc02a60-38d8-405c-b9b3-5fdc2a2c9ecb", 00:16:56.423 "strip_size_kb": 64, 00:16:56.423 "state": "online", 00:16:56.423 "raid_level": "raid5f", 00:16:56.423 "superblock": false, 00:16:56.423 "num_base_bdevs": 4, 00:16:56.423 "num_base_bdevs_discovered": 4, 00:16:56.423 "num_base_bdevs_operational": 4, 00:16:56.423 "base_bdevs_list": [ 00:16:56.423 { 00:16:56.423 "name": "NewBaseBdev", 00:16:56.423 "uuid": "19a8c254-96ee-4d12-8d78-28702a9efba1", 00:16:56.423 "is_configured": true, 00:16:56.423 "data_offset": 0, 00:16:56.423 "data_size": 65536 00:16:56.423 }, 00:16:56.423 { 00:16:56.423 "name": "BaseBdev2", 00:16:56.423 "uuid": "78cb9928-9fcd-420a-9629-42750aa70431", 00:16:56.423 "is_configured": true, 00:16:56.423 "data_offset": 0, 00:16:56.423 "data_size": 65536 00:16:56.423 }, 00:16:56.423 { 00:16:56.423 "name": "BaseBdev3", 00:16:56.423 "uuid": "294bc6c4-811e-4bc9-a92f-e716578fb5c8", 00:16:56.423 "is_configured": true, 00:16:56.423 "data_offset": 0, 00:16:56.423 "data_size": 65536 00:16:56.423 }, 00:16:56.423 { 00:16:56.423 "name": "BaseBdev4", 00:16:56.423 "uuid": "95de3f54-d4a0-4758-809f-e8b08b41d237", 00:16:56.423 "is_configured": true, 00:16:56.423 "data_offset": 0, 00:16:56.423 "data_size": 65536 00:16:56.423 } 00:16:56.423 ] 00:16:56.423 } 00:16:56.423 } 00:16:56.423 }' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.423 19:03:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:56.423 BaseBdev2 00:16:56.423 BaseBdev3 00:16:56.423 BaseBdev4' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.423 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 19:03:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 [2024-11-26 19:03:47.931923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.718 [2024-11-26 19:03:47.931962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.718 [2024-11-26 19:03:47.932082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.718 [2024-11-26 19:03:47.932569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.718 [2024-11-26 19:03:47.932593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83212 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83212 ']' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83212 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83212 00:16:56.718 killing process with pid 83212 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83212' 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83212 00:16:56.718 [2024-11-26 19:03:47.972946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.718 19:03:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83212 00:16:57.286 [2024-11-26 19:03:48.395913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.225 19:03:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:58.225 00:16:58.225 real 0m12.977s 00:16:58.225 user 0m21.369s 00:16:58.225 sys 0m1.839s 00:16:58.225 19:03:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.225 19:03:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.225 ************************************ 00:16:58.225 END TEST raid5f_state_function_test 00:16:58.225 ************************************ 00:16:58.225 19:03:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:58.225 19:03:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:58.225 19:03:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.225 19:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.225 ************************************ 00:16:58.225 START TEST 
raid5f_state_function_test_sb 00:16:58.225 ************************************ 00:16:58.225 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:58.225 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:58.225 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:58.496 
19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:58.496 Process raid pid: 83894 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83894 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83894' 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83894 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # 
'[' -z 83894 ']' 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.496 19:03:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.496 [2024-11-26 19:03:49.712023] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:16:58.496 [2024-11-26 19:03:49.712392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.753 [2024-11-26 19:03:49.904516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.753 [2024-11-26 19:03:50.067971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.038 [2024-11-26 19:03:50.298241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.038 [2024-11-26 19:03:50.298533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.611 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.611 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:59.611 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.612 [2024-11-26 19:03:50.723338] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:59.612 [2024-11-26 19:03:50.723421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:59.612 [2024-11-26 19:03:50.723439] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:59.612 [2024-11-26 19:03:50.723455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:59.612 [2024-11-26 19:03:50.723470] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:59.612 [2024-11-26 19:03:50.723483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:59.612 [2024-11-26 19:03:50.723492] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:59.612 [2024-11-26 19:03:50.723505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.612 "name": "Existed_Raid", 00:16:59.612 "uuid": "d863d545-64a4-4922-b5ae-23637455fbaa", 00:16:59.612 "strip_size_kb": 64, 00:16:59.612 "state": "configuring", 00:16:59.612 "raid_level": "raid5f", 00:16:59.612 "superblock": true, 00:16:59.612 "num_base_bdevs": 4, 00:16:59.612 "num_base_bdevs_discovered": 0, 00:16:59.612 "num_base_bdevs_operational": 4, 00:16:59.612 "base_bdevs_list": [ 00:16:59.612 { 00:16:59.612 "name": "BaseBdev1", 00:16:59.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.612 "is_configured": false, 00:16:59.612 "data_offset": 0, 00:16:59.612 "data_size": 0 00:16:59.612 }, 00:16:59.612 { 00:16:59.612 "name": "BaseBdev2", 00:16:59.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.612 "is_configured": false, 00:16:59.612 "data_offset": 0, 00:16:59.612 "data_size": 0 00:16:59.612 }, 00:16:59.612 { 00:16:59.612 "name": "BaseBdev3", 00:16:59.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.612 "is_configured": false, 00:16:59.612 "data_offset": 0, 00:16:59.612 "data_size": 0 00:16:59.612 }, 00:16:59.612 { 00:16:59.612 "name": "BaseBdev4", 00:16:59.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.612 "is_configured": false, 00:16:59.612 "data_offset": 0, 00:16:59.612 "data_size": 0 00:16:59.612 } 00:16:59.612 ] 00:16:59.612 }' 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.612 19:03:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.180 [2024-11-26 19:03:51.255385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.180 [2024-11-26 19:03:51.255435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.180 [2024-11-26 19:03:51.263420] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.180 [2024-11-26 19:03:51.263504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.180 [2024-11-26 19:03:51.263521] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.180 [2024-11-26 19:03:51.263537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.180 [2024-11-26 19:03:51.263547] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.180 [2024-11-26 19:03:51.263567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.180 [2024-11-26 19:03:51.263576] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:00.180 [2024-11-26 19:03:51.263590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.180 [2024-11-26 19:03:51.311212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.180 BaseBdev1 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.180 [ 00:17:00.180 { 00:17:00.180 "name": "BaseBdev1", 00:17:00.180 "aliases": [ 00:17:00.180 "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac" 00:17:00.180 ], 00:17:00.180 "product_name": "Malloc disk", 00:17:00.180 "block_size": 512, 00:17:00.180 "num_blocks": 65536, 00:17:00.180 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:00.180 "assigned_rate_limits": { 00:17:00.180 "rw_ios_per_sec": 0, 00:17:00.180 "rw_mbytes_per_sec": 0, 00:17:00.180 "r_mbytes_per_sec": 0, 00:17:00.180 "w_mbytes_per_sec": 0 00:17:00.180 }, 00:17:00.180 "claimed": true, 00:17:00.180 "claim_type": "exclusive_write", 00:17:00.180 "zoned": false, 00:17:00.180 "supported_io_types": { 00:17:00.180 "read": true, 00:17:00.180 "write": true, 00:17:00.180 "unmap": true, 00:17:00.180 "flush": true, 00:17:00.180 "reset": true, 00:17:00.180 "nvme_admin": false, 00:17:00.180 "nvme_io": false, 00:17:00.180 "nvme_io_md": false, 00:17:00.180 "write_zeroes": true, 00:17:00.180 "zcopy": true, 00:17:00.180 "get_zone_info": false, 00:17:00.180 "zone_management": false, 00:17:00.180 "zone_append": false, 00:17:00.180 "compare": false, 00:17:00.180 "compare_and_write": false, 00:17:00.180 "abort": true, 00:17:00.180 "seek_hole": false, 00:17:00.180 "seek_data": false, 00:17:00.180 "copy": true, 00:17:00.180 "nvme_iov_md": false 00:17:00.180 }, 00:17:00.180 "memory_domains": [ 00:17:00.180 { 00:17:00.180 "dma_device_id": "system", 00:17:00.180 "dma_device_type": 1 00:17:00.180 }, 00:17:00.180 { 00:17:00.180 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:00.180 "dma_device_type": 2 00:17:00.180 } 00:17:00.180 ], 00:17:00.180 "driver_specific": {} 00:17:00.180 } 00:17:00.180 ] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.180 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.180 "name": "Existed_Raid", 00:17:00.180 "uuid": "6f45186b-4d2e-4503-b382-d3f28a132b1e", 00:17:00.180 "strip_size_kb": 64, 00:17:00.180 "state": "configuring", 00:17:00.180 "raid_level": "raid5f", 00:17:00.180 "superblock": true, 00:17:00.180 "num_base_bdevs": 4, 00:17:00.180 "num_base_bdevs_discovered": 1, 00:17:00.180 "num_base_bdevs_operational": 4, 00:17:00.180 "base_bdevs_list": [ 00:17:00.180 { 00:17:00.180 "name": "BaseBdev1", 00:17:00.180 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:00.180 "is_configured": true, 00:17:00.180 "data_offset": 2048, 00:17:00.180 "data_size": 63488 00:17:00.180 }, 00:17:00.180 { 00:17:00.180 "name": "BaseBdev2", 00:17:00.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.180 "is_configured": false, 00:17:00.180 "data_offset": 0, 00:17:00.180 "data_size": 0 00:17:00.180 }, 00:17:00.180 { 00:17:00.180 "name": "BaseBdev3", 00:17:00.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.180 "is_configured": false, 00:17:00.180 "data_offset": 0, 00:17:00.180 "data_size": 0 00:17:00.180 }, 00:17:00.180 { 00:17:00.180 "name": "BaseBdev4", 00:17:00.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.180 "is_configured": false, 00:17:00.180 "data_offset": 0, 00:17:00.180 "data_size": 0 00:17:00.180 } 00:17:00.180 ] 00:17:00.180 }' 00:17:00.181 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.181 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.747 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.748 19:03:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.748 [2024-11-26 19:03:51.847424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.748 [2024-11-26 19:03:51.847506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.748 [2024-11-26 19:03:51.855507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.748 [2024-11-26 19:03:51.858225] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.748 [2024-11-26 19:03:51.858329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.748 [2024-11-26 19:03:51.858363] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.748 [2024-11-26 19:03:51.858381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.748 [2024-11-26 19:03:51.858391] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:00.748 [2024-11-26 19:03:51.858404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.748 19:03:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.748 "name": "Existed_Raid", 00:17:00.748 "uuid": "17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:00.748 "strip_size_kb": 64, 00:17:00.748 "state": "configuring", 00:17:00.748 "raid_level": "raid5f", 00:17:00.748 "superblock": true, 00:17:00.748 "num_base_bdevs": 4, 00:17:00.748 "num_base_bdevs_discovered": 1, 00:17:00.748 "num_base_bdevs_operational": 4, 00:17:00.748 "base_bdevs_list": [ 00:17:00.748 { 00:17:00.748 "name": "BaseBdev1", 00:17:00.748 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:00.748 "is_configured": true, 00:17:00.748 "data_offset": 2048, 00:17:00.748 "data_size": 63488 00:17:00.748 }, 00:17:00.748 { 00:17:00.748 "name": "BaseBdev2", 00:17:00.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.748 "is_configured": false, 00:17:00.748 "data_offset": 0, 00:17:00.748 "data_size": 0 00:17:00.748 }, 00:17:00.748 { 00:17:00.748 "name": "BaseBdev3", 00:17:00.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.748 "is_configured": false, 00:17:00.748 "data_offset": 0, 00:17:00.748 "data_size": 0 00:17:00.748 }, 00:17:00.748 { 00:17:00.748 "name": "BaseBdev4", 00:17:00.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.748 "is_configured": false, 00:17:00.748 "data_offset": 0, 00:17:00.748 "data_size": 0 00:17:00.748 } 00:17:00.748 ] 00:17:00.748 }' 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.748 19:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.006 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:01.006 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:01.006 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.264 [2024-11-26 19:03:52.403674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.264 BaseBdev2 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.264 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.264 [ 00:17:01.264 { 00:17:01.264 "name": "BaseBdev2", 00:17:01.264 "aliases": [ 00:17:01.264 
"2b9ea30a-1795-4997-9d4e-3c5981287cfc" 00:17:01.264 ], 00:17:01.264 "product_name": "Malloc disk", 00:17:01.264 "block_size": 512, 00:17:01.264 "num_blocks": 65536, 00:17:01.264 "uuid": "2b9ea30a-1795-4997-9d4e-3c5981287cfc", 00:17:01.264 "assigned_rate_limits": { 00:17:01.264 "rw_ios_per_sec": 0, 00:17:01.264 "rw_mbytes_per_sec": 0, 00:17:01.264 "r_mbytes_per_sec": 0, 00:17:01.264 "w_mbytes_per_sec": 0 00:17:01.264 }, 00:17:01.264 "claimed": true, 00:17:01.264 "claim_type": "exclusive_write", 00:17:01.264 "zoned": false, 00:17:01.265 "supported_io_types": { 00:17:01.265 "read": true, 00:17:01.265 "write": true, 00:17:01.265 "unmap": true, 00:17:01.265 "flush": true, 00:17:01.265 "reset": true, 00:17:01.265 "nvme_admin": false, 00:17:01.265 "nvme_io": false, 00:17:01.265 "nvme_io_md": false, 00:17:01.265 "write_zeroes": true, 00:17:01.265 "zcopy": true, 00:17:01.265 "get_zone_info": false, 00:17:01.265 "zone_management": false, 00:17:01.265 "zone_append": false, 00:17:01.265 "compare": false, 00:17:01.265 "compare_and_write": false, 00:17:01.265 "abort": true, 00:17:01.265 "seek_hole": false, 00:17:01.265 "seek_data": false, 00:17:01.265 "copy": true, 00:17:01.265 "nvme_iov_md": false 00:17:01.265 }, 00:17:01.265 "memory_domains": [ 00:17:01.265 { 00:17:01.265 "dma_device_id": "system", 00:17:01.265 "dma_device_type": 1 00:17:01.265 }, 00:17:01.265 { 00:17:01.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.265 "dma_device_type": 2 00:17:01.265 } 00:17:01.265 ], 00:17:01.265 "driver_specific": {} 00:17:01.265 } 00:17:01.265 ] 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.265 "name": "Existed_Raid", 00:17:01.265 "uuid": 
"17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:01.265 "strip_size_kb": 64, 00:17:01.265 "state": "configuring", 00:17:01.265 "raid_level": "raid5f", 00:17:01.265 "superblock": true, 00:17:01.265 "num_base_bdevs": 4, 00:17:01.265 "num_base_bdevs_discovered": 2, 00:17:01.265 "num_base_bdevs_operational": 4, 00:17:01.265 "base_bdevs_list": [ 00:17:01.265 { 00:17:01.265 "name": "BaseBdev1", 00:17:01.265 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:01.265 "is_configured": true, 00:17:01.265 "data_offset": 2048, 00:17:01.265 "data_size": 63488 00:17:01.265 }, 00:17:01.265 { 00:17:01.265 "name": "BaseBdev2", 00:17:01.265 "uuid": "2b9ea30a-1795-4997-9d4e-3c5981287cfc", 00:17:01.265 "is_configured": true, 00:17:01.265 "data_offset": 2048, 00:17:01.265 "data_size": 63488 00:17:01.265 }, 00:17:01.265 { 00:17:01.265 "name": "BaseBdev3", 00:17:01.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.265 "is_configured": false, 00:17:01.265 "data_offset": 0, 00:17:01.265 "data_size": 0 00:17:01.265 }, 00:17:01.265 { 00:17:01.265 "name": "BaseBdev4", 00:17:01.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.265 "is_configured": false, 00:17:01.265 "data_offset": 0, 00:17:01.265 "data_size": 0 00:17:01.265 } 00:17:01.265 ] 00:17:01.265 }' 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.265 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.832 19:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:01.832 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.832 19:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.832 [2024-11-26 19:03:53.034605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.832 BaseBdev3 
00:17:01.832 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.832 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:01.832 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:01.832 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.832 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.833 [ 00:17:01.833 { 00:17:01.833 "name": "BaseBdev3", 00:17:01.833 "aliases": [ 00:17:01.833 "2051f2d0-e069-41a4-8614-65b68c685d9e" 00:17:01.833 ], 00:17:01.833 "product_name": "Malloc disk", 00:17:01.833 "block_size": 512, 00:17:01.833 "num_blocks": 65536, 00:17:01.833 "uuid": "2051f2d0-e069-41a4-8614-65b68c685d9e", 00:17:01.833 
"assigned_rate_limits": { 00:17:01.833 "rw_ios_per_sec": 0, 00:17:01.833 "rw_mbytes_per_sec": 0, 00:17:01.833 "r_mbytes_per_sec": 0, 00:17:01.833 "w_mbytes_per_sec": 0 00:17:01.833 }, 00:17:01.833 "claimed": true, 00:17:01.833 "claim_type": "exclusive_write", 00:17:01.833 "zoned": false, 00:17:01.833 "supported_io_types": { 00:17:01.833 "read": true, 00:17:01.833 "write": true, 00:17:01.833 "unmap": true, 00:17:01.833 "flush": true, 00:17:01.833 "reset": true, 00:17:01.833 "nvme_admin": false, 00:17:01.833 "nvme_io": false, 00:17:01.833 "nvme_io_md": false, 00:17:01.833 "write_zeroes": true, 00:17:01.833 "zcopy": true, 00:17:01.833 "get_zone_info": false, 00:17:01.833 "zone_management": false, 00:17:01.833 "zone_append": false, 00:17:01.833 "compare": false, 00:17:01.833 "compare_and_write": false, 00:17:01.833 "abort": true, 00:17:01.833 "seek_hole": false, 00:17:01.833 "seek_data": false, 00:17:01.833 "copy": true, 00:17:01.833 "nvme_iov_md": false 00:17:01.833 }, 00:17:01.833 "memory_domains": [ 00:17:01.833 { 00:17:01.833 "dma_device_id": "system", 00:17:01.833 "dma_device_type": 1 00:17:01.833 }, 00:17:01.833 { 00:17:01.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.833 "dma_device_type": 2 00:17:01.833 } 00:17:01.833 ], 00:17:01.833 "driver_specific": {} 00:17:01.833 } 00:17:01.833 ] 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.833 "name": "Existed_Raid", 00:17:01.833 "uuid": "17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:01.833 "strip_size_kb": 64, 00:17:01.833 "state": "configuring", 00:17:01.833 "raid_level": "raid5f", 00:17:01.833 "superblock": true, 00:17:01.833 "num_base_bdevs": 4, 00:17:01.833 "num_base_bdevs_discovered": 3, 
00:17:01.833 "num_base_bdevs_operational": 4, 00:17:01.833 "base_bdevs_list": [ 00:17:01.833 { 00:17:01.833 "name": "BaseBdev1", 00:17:01.833 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:01.833 "is_configured": true, 00:17:01.833 "data_offset": 2048, 00:17:01.833 "data_size": 63488 00:17:01.833 }, 00:17:01.833 { 00:17:01.833 "name": "BaseBdev2", 00:17:01.833 "uuid": "2b9ea30a-1795-4997-9d4e-3c5981287cfc", 00:17:01.833 "is_configured": true, 00:17:01.833 "data_offset": 2048, 00:17:01.833 "data_size": 63488 00:17:01.833 }, 00:17:01.833 { 00:17:01.833 "name": "BaseBdev3", 00:17:01.833 "uuid": "2051f2d0-e069-41a4-8614-65b68c685d9e", 00:17:01.833 "is_configured": true, 00:17:01.833 "data_offset": 2048, 00:17:01.833 "data_size": 63488 00:17:01.833 }, 00:17:01.833 { 00:17:01.833 "name": "BaseBdev4", 00:17:01.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.833 "is_configured": false, 00:17:01.833 "data_offset": 0, 00:17:01.833 "data_size": 0 00:17:01.833 } 00:17:01.833 ] 00:17:01.833 }' 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.833 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 [2024-11-26 19:03:53.656813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.402 [2024-11-26 19:03:53.657247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:02.402 [2024-11-26 19:03:53.657277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:02.402 [2024-11-26 
19:03:53.657630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:02.402 BaseBdev4 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 [2024-11-26 19:03:53.665116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:02.402 [2024-11-26 19:03:53.665151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:02.402 [2024-11-26 19:03:53.665548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:02.402 19:03:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 [ 00:17:02.402 { 00:17:02.402 "name": "BaseBdev4", 00:17:02.402 "aliases": [ 00:17:02.402 "3fdf8b23-96e4-4661-b218-2d55cda88222" 00:17:02.402 ], 00:17:02.402 "product_name": "Malloc disk", 00:17:02.402 "block_size": 512, 00:17:02.402 "num_blocks": 65536, 00:17:02.402 "uuid": "3fdf8b23-96e4-4661-b218-2d55cda88222", 00:17:02.402 "assigned_rate_limits": { 00:17:02.402 "rw_ios_per_sec": 0, 00:17:02.402 "rw_mbytes_per_sec": 0, 00:17:02.402 "r_mbytes_per_sec": 0, 00:17:02.402 "w_mbytes_per_sec": 0 00:17:02.402 }, 00:17:02.402 "claimed": true, 00:17:02.402 "claim_type": "exclusive_write", 00:17:02.402 "zoned": false, 00:17:02.402 "supported_io_types": { 00:17:02.402 "read": true, 00:17:02.402 "write": true, 00:17:02.402 "unmap": true, 00:17:02.402 "flush": true, 00:17:02.402 "reset": true, 00:17:02.402 "nvme_admin": false, 00:17:02.402 "nvme_io": false, 00:17:02.402 "nvme_io_md": false, 00:17:02.402 "write_zeroes": true, 00:17:02.402 "zcopy": true, 00:17:02.402 "get_zone_info": false, 00:17:02.402 "zone_management": false, 00:17:02.402 "zone_append": false, 00:17:02.402 "compare": false, 00:17:02.402 "compare_and_write": false, 00:17:02.402 "abort": true, 00:17:02.402 "seek_hole": false, 00:17:02.402 "seek_data": false, 00:17:02.402 "copy": true, 00:17:02.402 "nvme_iov_md": false 00:17:02.402 }, 00:17:02.402 "memory_domains": [ 00:17:02.402 { 00:17:02.402 "dma_device_id": "system", 00:17:02.402 "dma_device_type": 1 00:17:02.402 }, 00:17:02.402 { 00:17:02.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.402 "dma_device_type": 2 00:17:02.402 } 00:17:02.402 ], 00:17:02.402 "driver_specific": {} 00:17:02.402 } 00:17:02.402 ] 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.402 19:03:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.402 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.403 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:02.403 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.403 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.403 "name": "Existed_Raid", 00:17:02.403 "uuid": "17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:02.403 "strip_size_kb": 64, 00:17:02.403 "state": "online", 00:17:02.403 "raid_level": "raid5f", 00:17:02.403 "superblock": true, 00:17:02.403 "num_base_bdevs": 4, 00:17:02.403 "num_base_bdevs_discovered": 4, 00:17:02.403 "num_base_bdevs_operational": 4, 00:17:02.403 "base_bdevs_list": [ 00:17:02.403 { 00:17:02.403 "name": "BaseBdev1", 00:17:02.403 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:02.403 "is_configured": true, 00:17:02.403 "data_offset": 2048, 00:17:02.403 "data_size": 63488 00:17:02.403 }, 00:17:02.403 { 00:17:02.403 "name": "BaseBdev2", 00:17:02.403 "uuid": "2b9ea30a-1795-4997-9d4e-3c5981287cfc", 00:17:02.403 "is_configured": true, 00:17:02.403 "data_offset": 2048, 00:17:02.403 "data_size": 63488 00:17:02.403 }, 00:17:02.403 { 00:17:02.403 "name": "BaseBdev3", 00:17:02.403 "uuid": "2051f2d0-e069-41a4-8614-65b68c685d9e", 00:17:02.403 "is_configured": true, 00:17:02.403 "data_offset": 2048, 00:17:02.403 "data_size": 63488 00:17:02.403 }, 00:17:02.403 { 00:17:02.403 "name": "BaseBdev4", 00:17:02.403 "uuid": "3fdf8b23-96e4-4661-b218-2d55cda88222", 00:17:02.403 "is_configured": true, 00:17:02.403 "data_offset": 2048, 00:17:02.403 "data_size": 63488 00:17:02.403 } 00:17:02.403 ] 00:17:02.403 }' 00:17:02.403 19:03:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.403 19:03:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.971 [2024-11-26 19:03:54.238244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.971 "name": "Existed_Raid", 00:17:02.971 "aliases": [ 00:17:02.971 "17028276-ddfb-41fd-82cc-fe998b56525d" 00:17:02.971 ], 00:17:02.971 "product_name": "Raid Volume", 00:17:02.971 "block_size": 512, 00:17:02.971 "num_blocks": 190464, 00:17:02.971 "uuid": "17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:02.971 "assigned_rate_limits": { 00:17:02.971 "rw_ios_per_sec": 0, 00:17:02.971 "rw_mbytes_per_sec": 0, 00:17:02.971 "r_mbytes_per_sec": 0, 00:17:02.971 "w_mbytes_per_sec": 0 00:17:02.971 }, 00:17:02.971 "claimed": false, 00:17:02.971 "zoned": false, 00:17:02.971 "supported_io_types": { 00:17:02.971 "read": true, 00:17:02.971 "write": true, 00:17:02.971 "unmap": false, 00:17:02.971 "flush": false, 
00:17:02.971 "reset": true, 00:17:02.971 "nvme_admin": false, 00:17:02.971 "nvme_io": false, 00:17:02.971 "nvme_io_md": false, 00:17:02.971 "write_zeroes": true, 00:17:02.971 "zcopy": false, 00:17:02.971 "get_zone_info": false, 00:17:02.971 "zone_management": false, 00:17:02.971 "zone_append": false, 00:17:02.971 "compare": false, 00:17:02.971 "compare_and_write": false, 00:17:02.971 "abort": false, 00:17:02.971 "seek_hole": false, 00:17:02.971 "seek_data": false, 00:17:02.971 "copy": false, 00:17:02.971 "nvme_iov_md": false 00:17:02.971 }, 00:17:02.971 "driver_specific": { 00:17:02.971 "raid": { 00:17:02.971 "uuid": "17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:02.971 "strip_size_kb": 64, 00:17:02.971 "state": "online", 00:17:02.971 "raid_level": "raid5f", 00:17:02.971 "superblock": true, 00:17:02.971 "num_base_bdevs": 4, 00:17:02.971 "num_base_bdevs_discovered": 4, 00:17:02.971 "num_base_bdevs_operational": 4, 00:17:02.971 "base_bdevs_list": [ 00:17:02.971 { 00:17:02.971 "name": "BaseBdev1", 00:17:02.971 "uuid": "2a7f7265-4962-4ab9-a73a-a91d7cdd58ac", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 }, 00:17:02.971 { 00:17:02.971 "name": "BaseBdev2", 00:17:02.971 "uuid": "2b9ea30a-1795-4997-9d4e-3c5981287cfc", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 }, 00:17:02.971 { 00:17:02.971 "name": "BaseBdev3", 00:17:02.971 "uuid": "2051f2d0-e069-41a4-8614-65b68c685d9e", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 }, 00:17:02.971 { 00:17:02.971 "name": "BaseBdev4", 00:17:02.971 "uuid": "3fdf8b23-96e4-4661-b218-2d55cda88222", 00:17:02.971 "is_configured": true, 00:17:02.971 "data_offset": 2048, 00:17:02.971 "data_size": 63488 00:17:02.971 } 00:17:02.971 ] 00:17:02.971 } 00:17:02.971 } 00:17:02.971 }' 00:17:02.971 19:03:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.231 BaseBdev2 00:17:03.231 BaseBdev3 00:17:03.231 BaseBdev4' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.231 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.490 [2024-11-26 19:03:54.610177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.490 "name": "Existed_Raid", 00:17:03.490 "uuid": "17028276-ddfb-41fd-82cc-fe998b56525d", 00:17:03.490 "strip_size_kb": 64, 00:17:03.490 "state": "online", 00:17:03.490 "raid_level": "raid5f", 00:17:03.490 "superblock": true, 00:17:03.490 "num_base_bdevs": 4, 00:17:03.490 "num_base_bdevs_discovered": 3, 00:17:03.490 "num_base_bdevs_operational": 3, 00:17:03.490 "base_bdevs_list": [ 00:17:03.490 { 00:17:03.490 "name": null, 00:17:03.490 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:03.490 "is_configured": false, 00:17:03.490 "data_offset": 0, 00:17:03.490 "data_size": 63488 00:17:03.490 }, 00:17:03.490 { 00:17:03.490 "name": "BaseBdev2", 00:17:03.490 "uuid": "2b9ea30a-1795-4997-9d4e-3c5981287cfc", 00:17:03.490 "is_configured": true, 00:17:03.490 "data_offset": 2048, 00:17:03.490 "data_size": 63488 00:17:03.490 }, 00:17:03.490 { 00:17:03.490 "name": "BaseBdev3", 00:17:03.490 "uuid": "2051f2d0-e069-41a4-8614-65b68c685d9e", 00:17:03.490 "is_configured": true, 00:17:03.490 "data_offset": 2048, 00:17:03.490 "data_size": 63488 00:17:03.490 }, 00:17:03.490 { 00:17:03.490 "name": "BaseBdev4", 00:17:03.490 "uuid": "3fdf8b23-96e4-4661-b218-2d55cda88222", 00:17:03.490 "is_configured": true, 00:17:03.490 "data_offset": 2048, 00:17:03.490 "data_size": 63488 00:17:03.490 } 00:17:03.490 ] 00:17:03.490 }' 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.490 19:03:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.058 [2024-11-26 19:03:55.249990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.058 [2024-11-26 19:03:55.250225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.058 [2024-11-26 19:03:55.342160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.058 
19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.058 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.058 [2024-11-26 19:03:55.402209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.317 [2024-11-26 19:03:55.557334] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:04.317 [2024-11-26 19:03:55.557409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.317 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.577 BaseBdev2 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 [ 00:17:04.577 { 00:17:04.577 "name": "BaseBdev2", 00:17:04.577 "aliases": [ 00:17:04.577 "5ccc16db-1724-4322-9a15-1c998b9c9db6" 00:17:04.577 ], 00:17:04.577 "product_name": "Malloc disk", 00:17:04.577 "block_size": 512, 00:17:04.577 "num_blocks": 65536, 00:17:04.577 "uuid": 
"5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:04.577 "assigned_rate_limits": { 00:17:04.577 "rw_ios_per_sec": 0, 00:17:04.577 "rw_mbytes_per_sec": 0, 00:17:04.577 "r_mbytes_per_sec": 0, 00:17:04.577 "w_mbytes_per_sec": 0 00:17:04.577 }, 00:17:04.577 "claimed": false, 00:17:04.577 "zoned": false, 00:17:04.577 "supported_io_types": { 00:17:04.577 "read": true, 00:17:04.577 "write": true, 00:17:04.577 "unmap": true, 00:17:04.577 "flush": true, 00:17:04.577 "reset": true, 00:17:04.577 "nvme_admin": false, 00:17:04.577 "nvme_io": false, 00:17:04.577 "nvme_io_md": false, 00:17:04.577 "write_zeroes": true, 00:17:04.577 "zcopy": true, 00:17:04.577 "get_zone_info": false, 00:17:04.577 "zone_management": false, 00:17:04.577 "zone_append": false, 00:17:04.577 "compare": false, 00:17:04.577 "compare_and_write": false, 00:17:04.577 "abort": true, 00:17:04.577 "seek_hole": false, 00:17:04.577 "seek_data": false, 00:17:04.577 "copy": true, 00:17:04.577 "nvme_iov_md": false 00:17:04.577 }, 00:17:04.577 "memory_domains": [ 00:17:04.577 { 00:17:04.577 "dma_device_id": "system", 00:17:04.577 "dma_device_type": 1 00:17:04.577 }, 00:17:04.577 { 00:17:04.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.577 "dma_device_type": 2 00:17:04.577 } 00:17:04.577 ], 00:17:04.577 "driver_specific": {} 00:17:04.577 } 00:17:04.577 ] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 BaseBdev3 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 [ 00:17:04.577 { 00:17:04.577 "name": "BaseBdev3", 00:17:04.577 "aliases": [ 00:17:04.577 "07f61e42-5adb-40c7-8279-b5a63f9d4591" 00:17:04.577 ], 00:17:04.577 
"product_name": "Malloc disk", 00:17:04.577 "block_size": 512, 00:17:04.577 "num_blocks": 65536, 00:17:04.577 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:04.577 "assigned_rate_limits": { 00:17:04.577 "rw_ios_per_sec": 0, 00:17:04.577 "rw_mbytes_per_sec": 0, 00:17:04.577 "r_mbytes_per_sec": 0, 00:17:04.577 "w_mbytes_per_sec": 0 00:17:04.577 }, 00:17:04.577 "claimed": false, 00:17:04.577 "zoned": false, 00:17:04.577 "supported_io_types": { 00:17:04.577 "read": true, 00:17:04.577 "write": true, 00:17:04.577 "unmap": true, 00:17:04.577 "flush": true, 00:17:04.577 "reset": true, 00:17:04.577 "nvme_admin": false, 00:17:04.577 "nvme_io": false, 00:17:04.577 "nvme_io_md": false, 00:17:04.577 "write_zeroes": true, 00:17:04.577 "zcopy": true, 00:17:04.577 "get_zone_info": false, 00:17:04.577 "zone_management": false, 00:17:04.577 "zone_append": false, 00:17:04.577 "compare": false, 00:17:04.577 "compare_and_write": false, 00:17:04.577 "abort": true, 00:17:04.577 "seek_hole": false, 00:17:04.577 "seek_data": false, 00:17:04.577 "copy": true, 00:17:04.577 "nvme_iov_md": false 00:17:04.577 }, 00:17:04.577 "memory_domains": [ 00:17:04.577 { 00:17:04.577 "dma_device_id": "system", 00:17:04.577 "dma_device_type": 1 00:17:04.577 }, 00:17:04.577 { 00:17:04.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.577 "dma_device_type": 2 00:17:04.577 } 00:17:04.577 ], 00:17:04.577 "driver_specific": {} 00:17:04.577 } 00:17:04.577 ] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.577 BaseBdev4 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.577 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 [ 00:17:04.578 { 00:17:04.578 "name": "BaseBdev4", 00:17:04.578 
"aliases": [ 00:17:04.578 "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2" 00:17:04.578 ], 00:17:04.578 "product_name": "Malloc disk", 00:17:04.578 "block_size": 512, 00:17:04.578 "num_blocks": 65536, 00:17:04.578 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:04.578 "assigned_rate_limits": { 00:17:04.578 "rw_ios_per_sec": 0, 00:17:04.578 "rw_mbytes_per_sec": 0, 00:17:04.578 "r_mbytes_per_sec": 0, 00:17:04.578 "w_mbytes_per_sec": 0 00:17:04.578 }, 00:17:04.578 "claimed": false, 00:17:04.578 "zoned": false, 00:17:04.578 "supported_io_types": { 00:17:04.578 "read": true, 00:17:04.578 "write": true, 00:17:04.578 "unmap": true, 00:17:04.578 "flush": true, 00:17:04.578 "reset": true, 00:17:04.578 "nvme_admin": false, 00:17:04.578 "nvme_io": false, 00:17:04.578 "nvme_io_md": false, 00:17:04.578 "write_zeroes": true, 00:17:04.578 "zcopy": true, 00:17:04.578 "get_zone_info": false, 00:17:04.578 "zone_management": false, 00:17:04.578 "zone_append": false, 00:17:04.578 "compare": false, 00:17:04.578 "compare_and_write": false, 00:17:04.578 "abort": true, 00:17:04.578 "seek_hole": false, 00:17:04.578 "seek_data": false, 00:17:04.578 "copy": true, 00:17:04.578 "nvme_iov_md": false 00:17:04.578 }, 00:17:04.578 "memory_domains": [ 00:17:04.578 { 00:17:04.578 "dma_device_id": "system", 00:17:04.578 "dma_device_type": 1 00:17:04.578 }, 00:17:04.578 { 00:17:04.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.578 "dma_device_type": 2 00:17:04.578 } 00:17:04.578 ], 00:17:04.578 "driver_specific": {} 00:17:04.578 } 00:17:04.578 ] 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.578 
19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.578 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 [2024-11-26 19:03:55.937878] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.578 [2024-11-26 19:03:55.937946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.578 [2024-11-26 19:03:55.937981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.578 [2024-11-26 19:03:55.940527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:04.578 [2024-11-26 19:03:55.940606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:04.836 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.836 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:04.836 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.836 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:04.836 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.837 "name": "Existed_Raid", 00:17:04.837 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:04.837 "strip_size_kb": 64, 00:17:04.837 "state": "configuring", 00:17:04.837 "raid_level": "raid5f", 00:17:04.837 "superblock": true, 00:17:04.837 "num_base_bdevs": 4, 00:17:04.837 "num_base_bdevs_discovered": 3, 00:17:04.837 "num_base_bdevs_operational": 4, 00:17:04.837 "base_bdevs_list": [ 00:17:04.837 { 00:17:04.837 "name": "BaseBdev1", 00:17:04.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.837 "is_configured": false, 00:17:04.837 "data_offset": 0, 00:17:04.837 "data_size": 0 00:17:04.837 }, 00:17:04.837 { 00:17:04.837 "name": "BaseBdev2", 00:17:04.837 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:04.837 "is_configured": true, 00:17:04.837 "data_offset": 2048, 00:17:04.837 "data_size": 63488 00:17:04.837 }, 00:17:04.837 { 00:17:04.837 "name": "BaseBdev3", 
00:17:04.837 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:04.837 "is_configured": true, 00:17:04.837 "data_offset": 2048, 00:17:04.837 "data_size": 63488 00:17:04.837 }, 00:17:04.837 { 00:17:04.837 "name": "BaseBdev4", 00:17:04.837 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:04.837 "is_configured": true, 00:17:04.837 "data_offset": 2048, 00:17:04.837 "data_size": 63488 00:17:04.837 } 00:17:04.837 ] 00:17:04.837 }' 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.837 19:03:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.405 [2024-11-26 19:03:56.470184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.405 
19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.405 "name": "Existed_Raid", 00:17:05.405 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:05.405 "strip_size_kb": 64, 00:17:05.405 "state": "configuring", 00:17:05.405 "raid_level": "raid5f", 00:17:05.405 "superblock": true, 00:17:05.405 "num_base_bdevs": 4, 00:17:05.405 "num_base_bdevs_discovered": 2, 00:17:05.405 "num_base_bdevs_operational": 4, 00:17:05.405 "base_bdevs_list": [ 00:17:05.405 { 00:17:05.405 "name": "BaseBdev1", 00:17:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.405 "is_configured": false, 00:17:05.405 "data_offset": 0, 00:17:05.405 "data_size": 0 00:17:05.405 }, 00:17:05.405 { 00:17:05.405 "name": null, 00:17:05.405 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:05.405 "is_configured": false, 00:17:05.405 "data_offset": 0, 00:17:05.405 "data_size": 63488 00:17:05.405 }, 00:17:05.405 { 
00:17:05.405 "name": "BaseBdev3", 00:17:05.405 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:05.405 "is_configured": true, 00:17:05.405 "data_offset": 2048, 00:17:05.405 "data_size": 63488 00:17:05.405 }, 00:17:05.405 { 00:17:05.405 "name": "BaseBdev4", 00:17:05.405 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:05.405 "is_configured": true, 00:17:05.405 "data_offset": 2048, 00:17:05.405 "data_size": 63488 00:17:05.405 } 00:17:05.405 ] 00:17:05.405 }' 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.405 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.663 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.663 19:03:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:05.663 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.663 19:03:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.663 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 [2024-11-26 19:03:57.088837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.922 BaseBdev1 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 [ 00:17:05.922 { 00:17:05.922 "name": "BaseBdev1", 00:17:05.922 "aliases": [ 00:17:05.922 "6e3387cf-30fb-43c4-b5da-d19f506e1c5f" 00:17:05.922 ], 00:17:05.922 "product_name": "Malloc disk", 00:17:05.922 "block_size": 512, 00:17:05.922 "num_blocks": 65536, 00:17:05.922 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:05.922 "assigned_rate_limits": { 00:17:05.922 "rw_ios_per_sec": 0, 00:17:05.922 "rw_mbytes_per_sec": 0, 00:17:05.922 
"r_mbytes_per_sec": 0, 00:17:05.922 "w_mbytes_per_sec": 0 00:17:05.922 }, 00:17:05.922 "claimed": true, 00:17:05.922 "claim_type": "exclusive_write", 00:17:05.922 "zoned": false, 00:17:05.922 "supported_io_types": { 00:17:05.922 "read": true, 00:17:05.922 "write": true, 00:17:05.922 "unmap": true, 00:17:05.922 "flush": true, 00:17:05.922 "reset": true, 00:17:05.922 "nvme_admin": false, 00:17:05.922 "nvme_io": false, 00:17:05.922 "nvme_io_md": false, 00:17:05.922 "write_zeroes": true, 00:17:05.922 "zcopy": true, 00:17:05.922 "get_zone_info": false, 00:17:05.922 "zone_management": false, 00:17:05.922 "zone_append": false, 00:17:05.922 "compare": false, 00:17:05.922 "compare_and_write": false, 00:17:05.922 "abort": true, 00:17:05.922 "seek_hole": false, 00:17:05.922 "seek_data": false, 00:17:05.922 "copy": true, 00:17:05.922 "nvme_iov_md": false 00:17:05.922 }, 00:17:05.922 "memory_domains": [ 00:17:05.922 { 00:17:05.922 "dma_device_id": "system", 00:17:05.922 "dma_device_type": 1 00:17:05.922 }, 00:17:05.922 { 00:17:05.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.922 "dma_device_type": 2 00:17:05.922 } 00:17:05.922 ], 00:17:05.922 "driver_specific": {} 00:17:05.922 } 00:17:05.922 ] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.922 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.923 19:03:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.923 "name": "Existed_Raid", 00:17:05.923 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:05.923 "strip_size_kb": 64, 00:17:05.923 "state": "configuring", 00:17:05.923 "raid_level": "raid5f", 00:17:05.923 "superblock": true, 00:17:05.923 "num_base_bdevs": 4, 00:17:05.923 "num_base_bdevs_discovered": 3, 00:17:05.923 "num_base_bdevs_operational": 4, 00:17:05.923 "base_bdevs_list": [ 00:17:05.923 { 00:17:05.923 "name": "BaseBdev1", 00:17:05.923 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 2048, 00:17:05.923 "data_size": 63488 00:17:05.923 
}, 00:17:05.923 { 00:17:05.923 "name": null, 00:17:05.923 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:05.923 "is_configured": false, 00:17:05.923 "data_offset": 0, 00:17:05.923 "data_size": 63488 00:17:05.923 }, 00:17:05.923 { 00:17:05.923 "name": "BaseBdev3", 00:17:05.923 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 2048, 00:17:05.923 "data_size": 63488 00:17:05.923 }, 00:17:05.923 { 00:17:05.923 "name": "BaseBdev4", 00:17:05.923 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 2048, 00:17:05.923 "data_size": 63488 00:17:05.923 } 00:17:05.923 ] 00:17:05.923 }' 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.923 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.490 
[2024-11-26 19:03:57.689113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:06.490 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.490 "name": "Existed_Raid", 00:17:06.491 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:06.491 "strip_size_kb": 64, 00:17:06.491 "state": "configuring", 00:17:06.491 "raid_level": "raid5f", 00:17:06.491 "superblock": true, 00:17:06.491 "num_base_bdevs": 4, 00:17:06.491 "num_base_bdevs_discovered": 2, 00:17:06.491 "num_base_bdevs_operational": 4, 00:17:06.491 "base_bdevs_list": [ 00:17:06.491 { 00:17:06.491 "name": "BaseBdev1", 00:17:06.491 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:06.491 "is_configured": true, 00:17:06.491 "data_offset": 2048, 00:17:06.491 "data_size": 63488 00:17:06.491 }, 00:17:06.491 { 00:17:06.491 "name": null, 00:17:06.491 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:06.491 "is_configured": false, 00:17:06.491 "data_offset": 0, 00:17:06.491 "data_size": 63488 00:17:06.491 }, 00:17:06.491 { 00:17:06.491 "name": null, 00:17:06.491 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:06.491 "is_configured": false, 00:17:06.491 "data_offset": 0, 00:17:06.491 "data_size": 63488 00:17:06.491 }, 00:17:06.491 { 00:17:06.491 "name": "BaseBdev4", 00:17:06.491 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:06.491 "is_configured": true, 00:17:06.491 "data_offset": 2048, 00:17:06.491 "data_size": 63488 00:17:06.491 } 00:17:06.491 ] 00:17:06.491 }' 00:17:06.491 19:03:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.491 19:03:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.059 [2024-11-26 19:03:58.313358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.059 19:03:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.059 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.059 "name": "Existed_Raid", 00:17:07.059 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:07.059 "strip_size_kb": 64, 00:17:07.059 "state": "configuring", 00:17:07.059 "raid_level": "raid5f", 00:17:07.059 "superblock": true, 00:17:07.059 "num_base_bdevs": 4, 00:17:07.059 "num_base_bdevs_discovered": 3, 00:17:07.059 "num_base_bdevs_operational": 4, 00:17:07.059 "base_bdevs_list": [ 00:17:07.059 { 00:17:07.059 "name": "BaseBdev1", 00:17:07.059 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:07.059 "is_configured": true, 00:17:07.059 "data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 }, 00:17:07.059 { 00:17:07.059 "name": null, 00:17:07.059 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:07.059 "is_configured": false, 00:17:07.059 "data_offset": 0, 00:17:07.059 "data_size": 63488 00:17:07.059 }, 00:17:07.059 { 00:17:07.059 "name": "BaseBdev3", 00:17:07.059 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:07.059 "is_configured": true, 00:17:07.059 "data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 }, 00:17:07.059 { 
00:17:07.059 "name": "BaseBdev4", 00:17:07.059 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:07.059 "is_configured": true, 00:17:07.059 "data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 } 00:17:07.059 ] 00:17:07.059 }' 00:17:07.060 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.060 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.628 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.628 [2024-11-26 19:03:58.917633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.887 19:03:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.887 "name": "Existed_Raid", 00:17:07.887 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:07.887 "strip_size_kb": 64, 00:17:07.887 "state": "configuring", 00:17:07.887 "raid_level": "raid5f", 00:17:07.887 "superblock": true, 00:17:07.887 "num_base_bdevs": 4, 00:17:07.887 "num_base_bdevs_discovered": 2, 00:17:07.887 
"num_base_bdevs_operational": 4, 00:17:07.887 "base_bdevs_list": [ 00:17:07.887 { 00:17:07.887 "name": null, 00:17:07.887 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:07.887 "is_configured": false, 00:17:07.887 "data_offset": 0, 00:17:07.887 "data_size": 63488 00:17:07.887 }, 00:17:07.887 { 00:17:07.887 "name": null, 00:17:07.887 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:07.887 "is_configured": false, 00:17:07.887 "data_offset": 0, 00:17:07.887 "data_size": 63488 00:17:07.887 }, 00:17:07.887 { 00:17:07.887 "name": "BaseBdev3", 00:17:07.887 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:07.887 "is_configured": true, 00:17:07.887 "data_offset": 2048, 00:17:07.887 "data_size": 63488 00:17:07.887 }, 00:17:07.887 { 00:17:07.887 "name": "BaseBdev4", 00:17:07.887 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:07.887 "is_configured": true, 00:17:07.887 "data_offset": 2048, 00:17:07.887 "data_size": 63488 00:17:07.887 } 00:17:07.887 ] 00:17:07.887 }' 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.887 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.456 [2024-11-26 19:03:59.585472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.456 "name": "Existed_Raid", 00:17:08.456 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:08.456 "strip_size_kb": 64, 00:17:08.456 "state": "configuring", 00:17:08.456 "raid_level": "raid5f", 00:17:08.456 "superblock": true, 00:17:08.456 "num_base_bdevs": 4, 00:17:08.456 "num_base_bdevs_discovered": 3, 00:17:08.456 "num_base_bdevs_operational": 4, 00:17:08.456 "base_bdevs_list": [ 00:17:08.456 { 00:17:08.456 "name": null, 00:17:08.456 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:08.456 "is_configured": false, 00:17:08.456 "data_offset": 0, 00:17:08.456 "data_size": 63488 00:17:08.456 }, 00:17:08.456 { 00:17:08.456 "name": "BaseBdev2", 00:17:08.456 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:08.456 "is_configured": true, 00:17:08.456 "data_offset": 2048, 00:17:08.456 "data_size": 63488 00:17:08.456 }, 00:17:08.456 { 00:17:08.456 "name": "BaseBdev3", 00:17:08.456 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:08.456 "is_configured": true, 00:17:08.456 "data_offset": 2048, 00:17:08.456 "data_size": 63488 00:17:08.456 }, 00:17:08.456 { 00:17:08.456 "name": "BaseBdev4", 00:17:08.456 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:08.456 "is_configured": true, 00:17:08.456 "data_offset": 2048, 00:17:08.456 "data_size": 63488 00:17:08.456 } 00:17:08.456 ] 00:17:08.456 }' 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.456 19:03:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6e3387cf-30fb-43c4-b5da-d19f506e1c5f 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.022 [2024-11-26 19:04:00.264049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:09.022 [2024-11-26 19:04:00.264359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.022 [2024-11-26 
19:04:00.264379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.022 NewBaseBdev 00:17:09.022 [2024-11-26 19:04:00.264709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.022 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.023 [2024-11-26 19:04:00.271196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.023 [2024-11-26 19:04:00.271247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:09.023 [2024-11-26 19:04:00.271582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.023 [ 00:17:09.023 { 00:17:09.023 "name": "NewBaseBdev", 00:17:09.023 "aliases": [ 00:17:09.023 "6e3387cf-30fb-43c4-b5da-d19f506e1c5f" 00:17:09.023 ], 00:17:09.023 "product_name": "Malloc disk", 00:17:09.023 "block_size": 512, 00:17:09.023 "num_blocks": 65536, 00:17:09.023 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:09.023 "assigned_rate_limits": { 00:17:09.023 "rw_ios_per_sec": 0, 00:17:09.023 "rw_mbytes_per_sec": 0, 00:17:09.023 "r_mbytes_per_sec": 0, 00:17:09.023 "w_mbytes_per_sec": 0 00:17:09.023 }, 00:17:09.023 "claimed": true, 00:17:09.023 "claim_type": "exclusive_write", 00:17:09.023 "zoned": false, 00:17:09.023 "supported_io_types": { 00:17:09.023 "read": true, 00:17:09.023 "write": true, 00:17:09.023 "unmap": true, 00:17:09.023 "flush": true, 00:17:09.023 "reset": true, 00:17:09.023 "nvme_admin": false, 00:17:09.023 "nvme_io": false, 00:17:09.023 "nvme_io_md": false, 00:17:09.023 "write_zeroes": true, 00:17:09.023 "zcopy": true, 00:17:09.023 "get_zone_info": false, 00:17:09.023 "zone_management": false, 00:17:09.023 "zone_append": false, 00:17:09.023 "compare": false, 00:17:09.023 "compare_and_write": false, 00:17:09.023 "abort": true, 00:17:09.023 "seek_hole": false, 00:17:09.023 "seek_data": false, 00:17:09.023 "copy": true, 00:17:09.023 "nvme_iov_md": false 00:17:09.023 }, 00:17:09.023 "memory_domains": [ 00:17:09.023 { 00:17:09.023 "dma_device_id": "system", 00:17:09.023 "dma_device_type": 1 00:17:09.023 }, 00:17:09.023 { 00:17:09.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.023 "dma_device_type": 2 00:17:09.023 } 00:17:09.023 ], 00:17:09.023 "driver_specific": {} 00:17:09.023 } 00:17:09.023 ] 00:17:09.023 19:04:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.023 "name": "Existed_Raid", 00:17:09.023 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:09.023 "strip_size_kb": 64, 00:17:09.023 "state": "online", 00:17:09.023 "raid_level": "raid5f", 00:17:09.023 "superblock": true, 00:17:09.023 "num_base_bdevs": 4, 00:17:09.023 "num_base_bdevs_discovered": 4, 00:17:09.023 "num_base_bdevs_operational": 4, 00:17:09.023 "base_bdevs_list": [ 00:17:09.023 { 00:17:09.023 "name": "NewBaseBdev", 00:17:09.023 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:09.023 "is_configured": true, 00:17:09.023 "data_offset": 2048, 00:17:09.023 "data_size": 63488 00:17:09.023 }, 00:17:09.023 { 00:17:09.023 "name": "BaseBdev2", 00:17:09.023 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:09.023 "is_configured": true, 00:17:09.023 "data_offset": 2048, 00:17:09.023 "data_size": 63488 00:17:09.023 }, 00:17:09.023 { 00:17:09.023 "name": "BaseBdev3", 00:17:09.023 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:09.023 "is_configured": true, 00:17:09.023 "data_offset": 2048, 00:17:09.023 "data_size": 63488 00:17:09.023 }, 00:17:09.023 { 00:17:09.023 "name": "BaseBdev4", 00:17:09.023 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:09.023 "is_configured": true, 00:17:09.023 "data_offset": 2048, 00:17:09.023 "data_size": 63488 00:17:09.023 } 00:17:09.023 ] 00:17:09.023 }' 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.023 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.589 [2024-11-26 19:04:00.851421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.589 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.589 "name": "Existed_Raid", 00:17:09.589 "aliases": [ 00:17:09.589 "41b220bb-9531-4961-b8a0-b49e847d3f18" 00:17:09.589 ], 00:17:09.589 "product_name": "Raid Volume", 00:17:09.589 "block_size": 512, 00:17:09.589 "num_blocks": 190464, 00:17:09.589 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:09.589 "assigned_rate_limits": { 00:17:09.589 "rw_ios_per_sec": 0, 00:17:09.589 "rw_mbytes_per_sec": 0, 00:17:09.589 "r_mbytes_per_sec": 0, 00:17:09.589 "w_mbytes_per_sec": 0 00:17:09.589 }, 00:17:09.589 "claimed": false, 00:17:09.589 "zoned": false, 00:17:09.589 "supported_io_types": { 00:17:09.589 "read": true, 00:17:09.589 "write": true, 00:17:09.589 "unmap": false, 00:17:09.589 "flush": false, 00:17:09.589 "reset": true, 00:17:09.589 "nvme_admin": false, 00:17:09.589 "nvme_io": false, 
00:17:09.589 "nvme_io_md": false, 00:17:09.589 "write_zeroes": true, 00:17:09.589 "zcopy": false, 00:17:09.589 "get_zone_info": false, 00:17:09.589 "zone_management": false, 00:17:09.589 "zone_append": false, 00:17:09.589 "compare": false, 00:17:09.589 "compare_and_write": false, 00:17:09.589 "abort": false, 00:17:09.589 "seek_hole": false, 00:17:09.589 "seek_data": false, 00:17:09.589 "copy": false, 00:17:09.590 "nvme_iov_md": false 00:17:09.590 }, 00:17:09.590 "driver_specific": { 00:17:09.590 "raid": { 00:17:09.590 "uuid": "41b220bb-9531-4961-b8a0-b49e847d3f18", 00:17:09.590 "strip_size_kb": 64, 00:17:09.590 "state": "online", 00:17:09.590 "raid_level": "raid5f", 00:17:09.590 "superblock": true, 00:17:09.590 "num_base_bdevs": 4, 00:17:09.590 "num_base_bdevs_discovered": 4, 00:17:09.590 "num_base_bdevs_operational": 4, 00:17:09.590 "base_bdevs_list": [ 00:17:09.590 { 00:17:09.590 "name": "NewBaseBdev", 00:17:09.590 "uuid": "6e3387cf-30fb-43c4-b5da-d19f506e1c5f", 00:17:09.590 "is_configured": true, 00:17:09.590 "data_offset": 2048, 00:17:09.590 "data_size": 63488 00:17:09.590 }, 00:17:09.590 { 00:17:09.590 "name": "BaseBdev2", 00:17:09.590 "uuid": "5ccc16db-1724-4322-9a15-1c998b9c9db6", 00:17:09.590 "is_configured": true, 00:17:09.590 "data_offset": 2048, 00:17:09.590 "data_size": 63488 00:17:09.590 }, 00:17:09.590 { 00:17:09.590 "name": "BaseBdev3", 00:17:09.590 "uuid": "07f61e42-5adb-40c7-8279-b5a63f9d4591", 00:17:09.590 "is_configured": true, 00:17:09.590 "data_offset": 2048, 00:17:09.590 "data_size": 63488 00:17:09.590 }, 00:17:09.590 { 00:17:09.590 "name": "BaseBdev4", 00:17:09.590 "uuid": "3d9ba342-94a2-4d13-a7dc-1e0a9db548d2", 00:17:09.590 "is_configured": true, 00:17:09.590 "data_offset": 2048, 00:17:09.590 "data_size": 63488 00:17:09.590 } 00:17:09.590 ] 00:17:09.590 } 00:17:09.590 } 00:17:09.590 }' 00:17:09.590 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:09.590 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:09.590 BaseBdev2 00:17:09.590 BaseBdev3 00:17:09.590 BaseBdev4' 00:17:09.590 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.852 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.852 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.852 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:09.852 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.852 19:04:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.852 19:04:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.852 19:04:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.852 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.853 19:04:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.853 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.853 [2024-11-26 19:04:01.211206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.853 [2024-11-26 19:04:01.211248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.853 [2024-11-26 19:04:01.211370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.853 [2024-11-26 19:04:01.211786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.853 [2024-11-26 19:04:01.211830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:10.120 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.120 19:04:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83894 00:17:10.120 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83894 ']' 00:17:10.120 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83894 00:17:10.120 19:04:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83894 00:17:10.121 killing process with pid 83894 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83894' 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83894 00:17:10.121 [2024-11-26 19:04:01.249761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.121 19:04:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83894 00:17:10.380 [2024-11-26 19:04:01.607006] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.318 19:04:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:11.318 00:17:11.318 real 0m13.085s 00:17:11.318 user 0m21.643s 00:17:11.318 sys 0m1.904s 00:17:11.318 19:04:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.318 19:04:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 ************************************ 00:17:11.318 END TEST raid5f_state_function_test_sb 00:17:11.318 ************************************ 00:17:11.578 19:04:02 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:11.578 19:04:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:11.578 
19:04:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.578 19:04:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.578 ************************************ 00:17:11.578 START TEST raid5f_superblock_test 00:17:11.578 ************************************ 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84571 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84571 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84571 ']' 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.578 19:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.578 [2024-11-26 19:04:02.840322] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:17:11.579 [2024-11-26 19:04:02.840506] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84571 ] 00:17:11.838 [2024-11-26 19:04:03.028211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.838 [2024-11-26 19:04:03.165454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.097 [2024-11-26 19:04:03.379625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.097 [2024-11-26 19:04:03.379709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.664 malloc1 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.664 [2024-11-26 19:04:03.839200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.664 [2024-11-26 19:04:03.839282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.664 [2024-11-26 19:04:03.839327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:12.664 [2024-11-26 19:04:03.839352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.664 [2024-11-26 19:04:03.842662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.664 [2024-11-26 19:04:03.842738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.664 pt1 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.664 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 malloc2 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 [2024-11-26 19:04:03.899704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:12.665 [2024-11-26 19:04:03.899780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.665 [2024-11-26 19:04:03.899839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:12.665 [2024-11-26 19:04:03.899868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.665 [2024-11-26 19:04:03.903012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.665 [2024-11-26 19:04:03.903054] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:12.665 pt2 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 malloc3 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 [2024-11-26 19:04:03.968605] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:12.665 [2024-11-26 19:04:03.968667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.665 [2024-11-26 19:04:03.968699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:12.665 [2024-11-26 19:04:03.968715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.665 [2024-11-26 19:04:03.971624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.665 [2024-11-26 19:04:03.971666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:12.665 pt3 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:12.665 19:04:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 19:04:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 malloc4 00:17:12.665 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.665 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:12.665 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 [2024-11-26 19:04:04.025520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:12.665 [2024-11-26 19:04:04.025599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.665 [2024-11-26 19:04:04.025631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:12.665 [2024-11-26 19:04:04.025647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.665 [2024-11-26 19:04:04.028537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.665 [2024-11-26 19:04:04.028578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:12.924 pt4 00:17:12.924 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.924 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:12.924 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:12.924 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:12.924 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.925 [2024-11-26 19:04:04.037562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.925 [2024-11-26 19:04:04.040079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.925 [2024-11-26 19:04:04.040211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:12.925 [2024-11-26 19:04:04.040301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:12.925 [2024-11-26 19:04:04.040566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:12.925 [2024-11-26 19:04:04.040589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:12.925 [2024-11-26 19:04:04.040925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:12.925 [2024-11-26 19:04:04.047695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:12.925 [2024-11-26 19:04:04.047733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:12.925 [2024-11-26 19:04:04.048010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.925 
19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.925 "name": "raid_bdev1", 00:17:12.925 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:12.925 "strip_size_kb": 64, 00:17:12.925 "state": "online", 00:17:12.925 "raid_level": "raid5f", 00:17:12.925 "superblock": true, 00:17:12.925 "num_base_bdevs": 4, 00:17:12.925 "num_base_bdevs_discovered": 4, 00:17:12.925 "num_base_bdevs_operational": 4, 00:17:12.925 "base_bdevs_list": [ 00:17:12.925 { 00:17:12.925 "name": "pt1", 00:17:12.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.925 "is_configured": true, 00:17:12.925 "data_offset": 2048, 00:17:12.925 "data_size": 63488 00:17:12.925 }, 00:17:12.925 { 00:17:12.925 "name": "pt2", 00:17:12.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.925 "is_configured": true, 00:17:12.925 "data_offset": 2048, 00:17:12.925 
"data_size": 63488 00:17:12.925 }, 00:17:12.925 { 00:17:12.925 "name": "pt3", 00:17:12.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.925 "is_configured": true, 00:17:12.925 "data_offset": 2048, 00:17:12.925 "data_size": 63488 00:17:12.925 }, 00:17:12.925 { 00:17:12.925 "name": "pt4", 00:17:12.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:12.925 "is_configured": true, 00:17:12.925 "data_offset": 2048, 00:17:12.925 "data_size": 63488 00:17:12.925 } 00:17:12.925 ] 00:17:12.925 }' 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.925 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.494 [2024-11-26 19:04:04.611949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.494 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:13.494 "name": "raid_bdev1", 00:17:13.494 "aliases": [ 00:17:13.494 "d3d56e45-b77c-4415-b344-23197aaf9058" 00:17:13.494 ], 00:17:13.494 "product_name": "Raid Volume", 00:17:13.494 "block_size": 512, 00:17:13.494 "num_blocks": 190464, 00:17:13.494 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:13.494 "assigned_rate_limits": { 00:17:13.494 "rw_ios_per_sec": 0, 00:17:13.494 "rw_mbytes_per_sec": 0, 00:17:13.494 "r_mbytes_per_sec": 0, 00:17:13.494 "w_mbytes_per_sec": 0 00:17:13.494 }, 00:17:13.494 "claimed": false, 00:17:13.494 "zoned": false, 00:17:13.494 "supported_io_types": { 00:17:13.494 "read": true, 00:17:13.494 "write": true, 00:17:13.494 "unmap": false, 00:17:13.494 "flush": false, 00:17:13.494 "reset": true, 00:17:13.494 "nvme_admin": false, 00:17:13.495 "nvme_io": false, 00:17:13.495 "nvme_io_md": false, 00:17:13.495 "write_zeroes": true, 00:17:13.495 "zcopy": false, 00:17:13.495 "get_zone_info": false, 00:17:13.495 "zone_management": false, 00:17:13.495 "zone_append": false, 00:17:13.495 "compare": false, 00:17:13.495 "compare_and_write": false, 00:17:13.495 "abort": false, 00:17:13.495 "seek_hole": false, 00:17:13.495 "seek_data": false, 00:17:13.495 "copy": false, 00:17:13.495 "nvme_iov_md": false 00:17:13.495 }, 00:17:13.495 "driver_specific": { 00:17:13.495 "raid": { 00:17:13.495 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:13.495 "strip_size_kb": 64, 00:17:13.495 "state": "online", 00:17:13.495 "raid_level": "raid5f", 00:17:13.495 "superblock": true, 00:17:13.495 "num_base_bdevs": 4, 00:17:13.495 "num_base_bdevs_discovered": 4, 00:17:13.495 "num_base_bdevs_operational": 4, 00:17:13.495 "base_bdevs_list": [ 00:17:13.495 { 00:17:13.495 "name": "pt1", 00:17:13.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.495 "is_configured": true, 00:17:13.495 "data_offset": 2048, 
00:17:13.495 "data_size": 63488 00:17:13.495 }, 00:17:13.495 { 00:17:13.495 "name": "pt2", 00:17:13.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.495 "is_configured": true, 00:17:13.495 "data_offset": 2048, 00:17:13.495 "data_size": 63488 00:17:13.495 }, 00:17:13.495 { 00:17:13.495 "name": "pt3", 00:17:13.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.495 "is_configured": true, 00:17:13.495 "data_offset": 2048, 00:17:13.495 "data_size": 63488 00:17:13.495 }, 00:17:13.495 { 00:17:13.495 "name": "pt4", 00:17:13.495 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:13.495 "is_configured": true, 00:17:13.495 "data_offset": 2048, 00:17:13.495 "data_size": 63488 00:17:13.495 } 00:17:13.495 ] 00:17:13.495 } 00:17:13.495 } 00:17:13.495 }' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:13.495 pt2 00:17:13.495 pt3 00:17:13.495 pt4' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.495 19:04:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.495 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.755 19:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 [2024-11-26 19:04:05.008066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d3d56e45-b77c-4415-b344-23197aaf9058 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
d3d56e45-b77c-4415-b344-23197aaf9058 ']' 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 [2024-11-26 19:04:05.051798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.755 [2024-11-26 19:04:05.051831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.755 [2024-11-26 19:04:05.051954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.755 [2024-11-26 19:04:05.052068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.755 [2024-11-26 19:04:05.052093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.755 
19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 [2024-11-26 19:04:05.211908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.015 [2024-11-26 19:04:05.214477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.015 [2024-11-26 19:04:05.214553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.015 [2024-11-26 19:04:05.214620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:14.015 [2024-11-26 19:04:05.214695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:14.015 [2024-11-26 19:04:05.214763] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:14.015 [2024-11-26 19:04:05.214796] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:14.015 [2024-11-26 19:04:05.214827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:14.015 [2024-11-26 19:04:05.214850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.015 [2024-11-26 19:04:05.214867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:14.015 request: 00:17:14.015 { 00:17:14.015 "name": "raid_bdev1", 00:17:14.015 "raid_level": "raid5f", 00:17:14.015 "base_bdevs": [ 00:17:14.015 "malloc1", 00:17:14.015 "malloc2", 00:17:14.015 "malloc3", 00:17:14.015 "malloc4" 00:17:14.015 ], 00:17:14.015 "strip_size_kb": 64, 00:17:14.015 "superblock": false, 00:17:14.015 "method": "bdev_raid_create", 00:17:14.015 "req_id": 1 00:17:14.015 } 00:17:14.015 Got JSON-RPC error response 
00:17:14.015 response: 00:17:14.015 { 00:17:14.015 "code": -17, 00:17:14.015 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.015 } 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 [2024-11-26 19:04:05.279946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.015 [2024-11-26 19:04:05.280026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:14.015 [2024-11-26 19:04:05.280060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:14.015 [2024-11-26 19:04:05.280079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.015 [2024-11-26 19:04:05.283363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.015 [2024-11-26 19:04:05.283407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.015 [2024-11-26 19:04:05.283513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.015 [2024-11-26 19:04:05.283603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.015 pt1 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.015 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.015 "name": "raid_bdev1", 00:17:14.015 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:14.015 "strip_size_kb": 64, 00:17:14.015 "state": "configuring", 00:17:14.015 "raid_level": "raid5f", 00:17:14.015 "superblock": true, 00:17:14.015 "num_base_bdevs": 4, 00:17:14.015 "num_base_bdevs_discovered": 1, 00:17:14.015 "num_base_bdevs_operational": 4, 00:17:14.015 "base_bdevs_list": [ 00:17:14.015 { 00:17:14.015 "name": "pt1", 00:17:14.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.015 "is_configured": true, 00:17:14.015 "data_offset": 2048, 00:17:14.015 "data_size": 63488 00:17:14.015 }, 00:17:14.015 { 00:17:14.015 "name": null, 00:17:14.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.015 "is_configured": false, 00:17:14.015 "data_offset": 2048, 00:17:14.015 "data_size": 63488 00:17:14.015 }, 00:17:14.015 { 00:17:14.015 "name": null, 00:17:14.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.015 "is_configured": false, 00:17:14.015 "data_offset": 2048, 00:17:14.015 "data_size": 63488 00:17:14.015 }, 00:17:14.016 { 00:17:14.016 "name": null, 00:17:14.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.016 "is_configured": false, 00:17:14.016 "data_offset": 2048, 00:17:14.016 "data_size": 63488 00:17:14.016 } 00:17:14.016 ] 00:17:14.016 }' 
00:17:14.016 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.016 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.584 [2024-11-26 19:04:05.816265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.584 [2024-11-26 19:04:05.816416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.584 [2024-11-26 19:04:05.816462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:14.584 [2024-11-26 19:04:05.816479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.584 [2024-11-26 19:04:05.817153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.584 [2024-11-26 19:04:05.817190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.584 [2024-11-26 19:04:05.817302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:14.584 [2024-11-26 19:04:05.817343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.584 pt2 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.584 [2024-11-26 19:04:05.824238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.584 "name": "raid_bdev1", 00:17:14.584 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:14.584 "strip_size_kb": 64, 00:17:14.584 "state": "configuring", 00:17:14.584 "raid_level": "raid5f", 00:17:14.584 "superblock": true, 00:17:14.584 "num_base_bdevs": 4, 00:17:14.584 "num_base_bdevs_discovered": 1, 00:17:14.584 "num_base_bdevs_operational": 4, 00:17:14.584 "base_bdevs_list": [ 00:17:14.584 { 00:17:14.584 "name": "pt1", 00:17:14.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.584 "is_configured": true, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 }, 00:17:14.584 { 00:17:14.584 "name": null, 00:17:14.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.584 "is_configured": false, 00:17:14.584 "data_offset": 0, 00:17:14.584 "data_size": 63488 00:17:14.584 }, 00:17:14.584 { 00:17:14.584 "name": null, 00:17:14.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.584 "is_configured": false, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 }, 00:17:14.584 { 00:17:14.584 "name": null, 00:17:14.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.584 "is_configured": false, 00:17:14.584 "data_offset": 2048, 00:17:14.584 "data_size": 63488 00:17:14.584 } 00:17:14.584 ] 00:17:14.584 }' 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.584 19:04:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.150 [2024-11-26 19:04:06.336405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.150 [2024-11-26 19:04:06.336487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.150 [2024-11-26 19:04:06.336519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:15.150 [2024-11-26 19:04:06.336534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.150 [2024-11-26 19:04:06.337138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.150 [2024-11-26 19:04:06.337164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.150 [2024-11-26 19:04:06.337275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.150 [2024-11-26 19:04:06.337309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.150 pt2 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.150 [2024-11-26 19:04:06.344369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:15.150 [2024-11-26 19:04:06.344449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.150 [2024-11-26 19:04:06.344498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:15.150 [2024-11-26 19:04:06.344514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.150 [2024-11-26 19:04:06.344970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.150 [2024-11-26 19:04:06.345000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:15.150 [2024-11-26 19:04:06.345079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:15.150 [2024-11-26 19:04:06.345115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:15.150 pt3 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.150 [2024-11-26 19:04:06.352333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:15.150 [2024-11-26 19:04:06.352382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.150 [2024-11-26 19:04:06.352406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:15.150 [2024-11-26 19:04:06.352420] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.150 [2024-11-26 19:04:06.352872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.150 [2024-11-26 19:04:06.352920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:15.150 [2024-11-26 19:04:06.353003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:15.150 [2024-11-26 19:04:06.353036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:15.150 [2024-11-26 19:04:06.353209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.150 [2024-11-26 19:04:06.353225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:15.150 [2024-11-26 19:04:06.353534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:15.150 [2024-11-26 19:04:06.360036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.150 [2024-11-26 19:04:06.360072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:15.150 [2024-11-26 19:04:06.360289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.150 pt4 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.150 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.150 "name": "raid_bdev1", 00:17:15.150 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:15.150 "strip_size_kb": 64, 00:17:15.150 "state": "online", 00:17:15.150 "raid_level": "raid5f", 00:17:15.150 "superblock": true, 00:17:15.150 "num_base_bdevs": 4, 00:17:15.151 "num_base_bdevs_discovered": 4, 00:17:15.151 "num_base_bdevs_operational": 4, 00:17:15.151 "base_bdevs_list": [ 00:17:15.151 { 00:17:15.151 "name": "pt1", 00:17:15.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.151 "is_configured": true, 00:17:15.151 
"data_offset": 2048, 00:17:15.151 "data_size": 63488 00:17:15.151 }, 00:17:15.151 { 00:17:15.151 "name": "pt2", 00:17:15.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.151 "is_configured": true, 00:17:15.151 "data_offset": 2048, 00:17:15.151 "data_size": 63488 00:17:15.151 }, 00:17:15.151 { 00:17:15.151 "name": "pt3", 00:17:15.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.151 "is_configured": true, 00:17:15.151 "data_offset": 2048, 00:17:15.151 "data_size": 63488 00:17:15.151 }, 00:17:15.151 { 00:17:15.151 "name": "pt4", 00:17:15.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.151 "is_configured": true, 00:17:15.151 "data_offset": 2048, 00:17:15.151 "data_size": 63488 00:17:15.151 } 00:17:15.151 ] 00:17:15.151 }' 00:17:15.151 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.151 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.717 19:04:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.717 [2024-11-26 19:04:06.892154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.717 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.717 "name": "raid_bdev1", 00:17:15.717 "aliases": [ 00:17:15.717 "d3d56e45-b77c-4415-b344-23197aaf9058" 00:17:15.717 ], 00:17:15.717 "product_name": "Raid Volume", 00:17:15.717 "block_size": 512, 00:17:15.717 "num_blocks": 190464, 00:17:15.717 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:15.717 "assigned_rate_limits": { 00:17:15.717 "rw_ios_per_sec": 0, 00:17:15.717 "rw_mbytes_per_sec": 0, 00:17:15.717 "r_mbytes_per_sec": 0, 00:17:15.717 "w_mbytes_per_sec": 0 00:17:15.717 }, 00:17:15.717 "claimed": false, 00:17:15.717 "zoned": false, 00:17:15.717 "supported_io_types": { 00:17:15.717 "read": true, 00:17:15.717 "write": true, 00:17:15.717 "unmap": false, 00:17:15.717 "flush": false, 00:17:15.717 "reset": true, 00:17:15.717 "nvme_admin": false, 00:17:15.717 "nvme_io": false, 00:17:15.717 "nvme_io_md": false, 00:17:15.717 "write_zeroes": true, 00:17:15.717 "zcopy": false, 00:17:15.717 "get_zone_info": false, 00:17:15.717 "zone_management": false, 00:17:15.717 "zone_append": false, 00:17:15.717 "compare": false, 00:17:15.717 "compare_and_write": false, 00:17:15.717 "abort": false, 00:17:15.717 "seek_hole": false, 00:17:15.717 "seek_data": false, 00:17:15.717 "copy": false, 00:17:15.717 "nvme_iov_md": false 00:17:15.717 }, 00:17:15.717 "driver_specific": { 00:17:15.717 "raid": { 00:17:15.717 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:15.717 "strip_size_kb": 64, 00:17:15.717 "state": "online", 00:17:15.717 "raid_level": "raid5f", 00:17:15.717 "superblock": true, 00:17:15.717 "num_base_bdevs": 4, 00:17:15.717 "num_base_bdevs_discovered": 4, 
00:17:15.717 "num_base_bdevs_operational": 4, 00:17:15.717 "base_bdevs_list": [ 00:17:15.717 { 00:17:15.717 "name": "pt1", 00:17:15.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.717 "is_configured": true, 00:17:15.717 "data_offset": 2048, 00:17:15.717 "data_size": 63488 00:17:15.717 }, 00:17:15.717 { 00:17:15.717 "name": "pt2", 00:17:15.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.717 "is_configured": true, 00:17:15.717 "data_offset": 2048, 00:17:15.717 "data_size": 63488 00:17:15.718 }, 00:17:15.718 { 00:17:15.718 "name": "pt3", 00:17:15.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.718 "is_configured": true, 00:17:15.718 "data_offset": 2048, 00:17:15.718 "data_size": 63488 00:17:15.718 }, 00:17:15.718 { 00:17:15.718 "name": "pt4", 00:17:15.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.718 "is_configured": true, 00:17:15.718 "data_offset": 2048, 00:17:15.718 "data_size": 63488 00:17:15.718 } 00:17:15.718 ] 00:17:15.718 } 00:17:15.718 } 00:17:15.718 }' 00:17:15.718 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.718 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:15.718 pt2 00:17:15.718 pt3 00:17:15.718 pt4' 00:17:15.718 19:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.718 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.718 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.718 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:15.718 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.718 19:04:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.718 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.718 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 [2024-11-26 19:04:07.252205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.977 
19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d3d56e45-b77c-4415-b344-23197aaf9058 '!=' d3d56e45-b77c-4415-b344-23197aaf9058 ']' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 [2024-11-26 19:04:07.308082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.977 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.236 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.236 "name": "raid_bdev1", 00:17:16.236 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:16.236 "strip_size_kb": 64, 00:17:16.236 "state": "online", 00:17:16.236 "raid_level": "raid5f", 00:17:16.236 "superblock": true, 00:17:16.236 "num_base_bdevs": 4, 00:17:16.236 "num_base_bdevs_discovered": 3, 00:17:16.236 "num_base_bdevs_operational": 3, 00:17:16.236 "base_bdevs_list": [ 00:17:16.236 { 00:17:16.236 "name": null, 00:17:16.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.236 "is_configured": false, 00:17:16.236 "data_offset": 0, 00:17:16.236 "data_size": 63488 00:17:16.236 }, 00:17:16.236 { 00:17:16.236 "name": "pt2", 00:17:16.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.236 "is_configured": true, 00:17:16.236 "data_offset": 2048, 00:17:16.236 "data_size": 63488 00:17:16.236 }, 00:17:16.236 { 00:17:16.236 "name": "pt3", 00:17:16.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.236 "is_configured": true, 00:17:16.236 "data_offset": 2048, 00:17:16.236 "data_size": 63488 00:17:16.236 }, 00:17:16.236 { 00:17:16.236 "name": "pt4", 00:17:16.236 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.236 "is_configured": true, 00:17:16.236 
"data_offset": 2048, 00:17:16.236 "data_size": 63488 00:17:16.236 } 00:17:16.236 ] 00:17:16.236 }' 00:17:16.236 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.236 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.495 [2024-11-26 19:04:07.840170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.495 [2024-11-26 19:04:07.840214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.495 [2024-11-26 19:04:07.840320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.495 [2024-11-26 19:04:07.840427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.495 [2024-11-26 19:04:07.840444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.495 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.754 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 [2024-11-26 19:04:07.936146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.754 [2024-11-26 19:04:07.936205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.755 [2024-11-26 19:04:07.936245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:16.755 [2024-11-26 19:04:07.936272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.755 [2024-11-26 19:04:07.939216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.755 [2024-11-26 19:04:07.939259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.755 [2024-11-26 19:04:07.939364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.755 [2024-11-26 19:04:07.939427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.755 pt2 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.755 19:04:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.755 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.755 "name": "raid_bdev1", 00:17:16.755 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:16.755 "strip_size_kb": 64, 00:17:16.755 "state": "configuring", 00:17:16.755 "raid_level": "raid5f", 00:17:16.755 "superblock": true, 00:17:16.755 
"num_base_bdevs": 4, 00:17:16.755 "num_base_bdevs_discovered": 1, 00:17:16.755 "num_base_bdevs_operational": 3, 00:17:16.755 "base_bdevs_list": [ 00:17:16.755 { 00:17:16.755 "name": null, 00:17:16.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.755 "is_configured": false, 00:17:16.755 "data_offset": 2048, 00:17:16.755 "data_size": 63488 00:17:16.755 }, 00:17:16.755 { 00:17:16.755 "name": "pt2", 00:17:16.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.755 "is_configured": true, 00:17:16.755 "data_offset": 2048, 00:17:16.755 "data_size": 63488 00:17:16.755 }, 00:17:16.755 { 00:17:16.755 "name": null, 00:17:16.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.755 "is_configured": false, 00:17:16.755 "data_offset": 2048, 00:17:16.755 "data_size": 63488 00:17:16.755 }, 00:17:16.755 { 00:17:16.755 "name": null, 00:17:16.755 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.755 "is_configured": false, 00:17:16.755 "data_offset": 2048, 00:17:16.755 "data_size": 63488 00:17:16.755 } 00:17:16.755 ] 00:17:16.755 }' 00:17:16.755 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.755 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.321 [2024-11-26 19:04:08.480345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:17.321 [2024-11-26 
19:04:08.480448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.321 [2024-11-26 19:04:08.480487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:17.321 [2024-11-26 19:04:08.480503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.321 [2024-11-26 19:04:08.481212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.321 [2024-11-26 19:04:08.481249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:17.321 [2024-11-26 19:04:08.481365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:17.321 [2024-11-26 19:04:08.481399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:17.321 pt3 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.321 "name": "raid_bdev1", 00:17:17.321 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:17.321 "strip_size_kb": 64, 00:17:17.321 "state": "configuring", 00:17:17.321 "raid_level": "raid5f", 00:17:17.321 "superblock": true, 00:17:17.321 "num_base_bdevs": 4, 00:17:17.321 "num_base_bdevs_discovered": 2, 00:17:17.321 "num_base_bdevs_operational": 3, 00:17:17.321 "base_bdevs_list": [ 00:17:17.321 { 00:17:17.321 "name": null, 00:17:17.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.321 "is_configured": false, 00:17:17.321 "data_offset": 2048, 00:17:17.321 "data_size": 63488 00:17:17.321 }, 00:17:17.321 { 00:17:17.321 "name": "pt2", 00:17:17.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.321 "is_configured": true, 00:17:17.321 "data_offset": 2048, 00:17:17.321 "data_size": 63488 00:17:17.321 }, 00:17:17.321 { 00:17:17.321 "name": "pt3", 00:17:17.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.321 "is_configured": true, 00:17:17.321 "data_offset": 2048, 00:17:17.321 "data_size": 63488 00:17:17.321 }, 00:17:17.321 { 00:17:17.321 "name": null, 00:17:17.321 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.321 "is_configured": false, 00:17:17.321 "data_offset": 2048, 
00:17:17.321 "data_size": 63488 00:17:17.321 } 00:17:17.321 ] 00:17:17.321 }' 00:17:17.321 19:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.322 19:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.889 [2024-11-26 19:04:09.012533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:17.889 [2024-11-26 19:04:09.012625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.889 [2024-11-26 19:04:09.012670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:17.889 [2024-11-26 19:04:09.012688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.889 [2024-11-26 19:04:09.013300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.889 [2024-11-26 19:04:09.013332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:17.889 [2024-11-26 19:04:09.013445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:17.889 [2024-11-26 19:04:09.013487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:17.889 [2024-11-26 19:04:09.013668] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:17.889 [2024-11-26 19:04:09.013690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:17.889 [2024-11-26 19:04:09.014028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:17.889 [2024-11-26 19:04:09.021028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:17.889 [2024-11-26 19:04:09.021079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:17.889 [2024-11-26 19:04:09.021419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.889 pt4 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.889 
19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.889 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.889 "name": "raid_bdev1", 00:17:17.889 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:17.889 "strip_size_kb": 64, 00:17:17.889 "state": "online", 00:17:17.889 "raid_level": "raid5f", 00:17:17.889 "superblock": true, 00:17:17.889 "num_base_bdevs": 4, 00:17:17.889 "num_base_bdevs_discovered": 3, 00:17:17.889 "num_base_bdevs_operational": 3, 00:17:17.889 "base_bdevs_list": [ 00:17:17.890 { 00:17:17.890 "name": null, 00:17:17.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.890 "is_configured": false, 00:17:17.890 "data_offset": 2048, 00:17:17.890 "data_size": 63488 00:17:17.890 }, 00:17:17.890 { 00:17:17.890 "name": "pt2", 00:17:17.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.890 "is_configured": true, 00:17:17.890 "data_offset": 2048, 00:17:17.890 "data_size": 63488 00:17:17.890 }, 00:17:17.890 { 00:17:17.890 "name": "pt3", 00:17:17.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.890 "is_configured": true, 00:17:17.890 "data_offset": 2048, 00:17:17.890 "data_size": 63488 00:17:17.890 }, 00:17:17.890 { 00:17:17.890 "name": "pt4", 00:17:17.890 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.890 "is_configured": true, 00:17:17.890 "data_offset": 2048, 00:17:17.890 "data_size": 63488 00:17:17.890 } 00:17:17.890 ] 00:17:17.890 }' 00:17:17.890 19:04:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.890 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 [2024-11-26 19:04:09.552906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.458 [2024-11-26 19:04:09.552961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.458 [2024-11-26 19:04:09.553067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.458 [2024-11-26 19:04:09.553167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.458 [2024-11-26 19:04:09.553188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 [2024-11-26 19:04:09.620916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.458 [2024-11-26 19:04:09.621001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.458 [2024-11-26 19:04:09.621037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:18.458 [2024-11-26 19:04:09.621058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.458 [2024-11-26 19:04:09.624087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.458 [2024-11-26 19:04:09.624261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.458 [2024-11-26 19:04:09.624389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:18.458 [2024-11-26 19:04:09.624458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.458 
[2024-11-26 19:04:09.624627] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:18.458 [2024-11-26 19:04:09.624650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.458 [2024-11-26 19:04:09.624672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:18.458 [2024-11-26 19:04:09.624777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.458 pt1 00:17:18.458 [2024-11-26 19:04:09.624970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:18.458 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.459 "name": "raid_bdev1", 00:17:18.459 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:18.459 "strip_size_kb": 64, 00:17:18.459 "state": "configuring", 00:17:18.459 "raid_level": "raid5f", 00:17:18.459 "superblock": true, 00:17:18.459 "num_base_bdevs": 4, 00:17:18.459 "num_base_bdevs_discovered": 2, 00:17:18.459 "num_base_bdevs_operational": 3, 00:17:18.459 "base_bdevs_list": [ 00:17:18.459 { 00:17:18.459 "name": null, 00:17:18.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.459 "is_configured": false, 00:17:18.459 "data_offset": 2048, 00:17:18.459 "data_size": 63488 00:17:18.459 }, 00:17:18.459 { 00:17:18.459 "name": "pt2", 00:17:18.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.459 "is_configured": true, 00:17:18.459 "data_offset": 2048, 00:17:18.459 "data_size": 63488 00:17:18.459 }, 00:17:18.459 { 00:17:18.459 "name": "pt3", 00:17:18.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.459 "is_configured": true, 00:17:18.459 "data_offset": 2048, 00:17:18.459 "data_size": 63488 00:17:18.459 }, 00:17:18.459 { 00:17:18.459 "name": null, 00:17:18.459 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.459 "is_configured": false, 00:17:18.459 "data_offset": 2048, 00:17:18.459 "data_size": 63488 00:17:18.459 } 00:17:18.459 ] 
00:17:18.459 }' 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.459 19:04:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.027 [2024-11-26 19:04:10.245248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:19.027 [2024-11-26 19:04:10.245358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.027 [2024-11-26 19:04:10.245393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:19.027 [2024-11-26 19:04:10.245409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.027 [2024-11-26 19:04:10.246013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.027 [2024-11-26 19:04:10.246038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:19.027 [2024-11-26 19:04:10.246155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:19.027 [2024-11-26 19:04:10.246189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:19.027 [2024-11-26 19:04:10.246365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:19.027 [2024-11-26 19:04:10.246388] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:19.027 [2024-11-26 19:04:10.246703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:19.027 [2024-11-26 19:04:10.253442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:19.027 [2024-11-26 19:04:10.253471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:19.027 pt4 00:17:19.027 [2024-11-26 19:04:10.253848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.027 19:04:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.027 "name": "raid_bdev1", 00:17:19.027 "uuid": "d3d56e45-b77c-4415-b344-23197aaf9058", 00:17:19.027 "strip_size_kb": 64, 00:17:19.027 "state": "online", 00:17:19.027 "raid_level": "raid5f", 00:17:19.027 "superblock": true, 00:17:19.027 "num_base_bdevs": 4, 00:17:19.027 "num_base_bdevs_discovered": 3, 00:17:19.027 "num_base_bdevs_operational": 3, 00:17:19.027 "base_bdevs_list": [ 00:17:19.027 { 00:17:19.027 "name": null, 00:17:19.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.027 "is_configured": false, 00:17:19.027 "data_offset": 2048, 00:17:19.027 "data_size": 63488 00:17:19.027 }, 00:17:19.027 { 00:17:19.027 "name": "pt2", 00:17:19.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.027 "is_configured": true, 00:17:19.027 "data_offset": 2048, 00:17:19.027 "data_size": 63488 00:17:19.027 }, 00:17:19.027 { 00:17:19.027 "name": "pt3", 00:17:19.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.027 "is_configured": true, 00:17:19.027 "data_offset": 2048, 00:17:19.027 "data_size": 63488 
00:17:19.027 }, 00:17:19.027 { 00:17:19.027 "name": "pt4", 00:17:19.027 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.027 "is_configured": true, 00:17:19.027 "data_offset": 2048, 00:17:19.027 "data_size": 63488 00:17:19.027 } 00:17:19.027 ] 00:17:19.027 }' 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.027 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.595 [2024-11-26 19:04:10.845768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d3d56e45-b77c-4415-b344-23197aaf9058 '!=' d3d56e45-b77c-4415-b344-23197aaf9058 ']' 00:17:19.595 19:04:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84571 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84571 ']' 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84571 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84571 00:17:19.595 killing process with pid 84571 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84571' 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84571 00:17:19.595 [2024-11-26 19:04:10.927213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.595 [2024-11-26 19:04:10.927317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.595 19:04:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84571 00:17:19.595 [2024-11-26 19:04:10.927429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.595 [2024-11-26 19:04:10.927466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:20.162 [2024-11-26 19:04:11.298835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.098 ************************************ 00:17:21.098 END TEST raid5f_superblock_test 00:17:21.098 
************************************ 00:17:21.098 19:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:21.098 00:17:21.098 real 0m9.676s 00:17:21.098 user 0m15.836s 00:17:21.098 sys 0m1.444s 00:17:21.098 19:04:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.098 19:04:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.098 19:04:12 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:21.098 19:04:12 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:21.098 19:04:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:21.098 19:04:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.098 19:04:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.098 ************************************ 00:17:21.098 START TEST raid5f_rebuild_test 00:17:21.098 ************************************ 00:17:21.098 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:21.098 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:21.098 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85068 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85068 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85068 ']' 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.099 19:04:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.358 [2024-11-26 19:04:12.553980] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:17:21.358 [2024-11-26 19:04:12.554370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85068 ] 00:17:21.358 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:21.358 Zero copy mechanism will not be used. [2024-11-26 19:04:12.738917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.617 [2024-11-26 19:04:12.899343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.875 [2024-11-26 19:04:13.151321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.875 [2024-11-26 19:04:13.151584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.442 19:04:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 BaseBdev1_malloc 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 [2024-11-26 19:04:13.675323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.442 [2024-11-26 19:04:13.675411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.442 [2024-11-26 19:04:13.675458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:22.442 [2024-11-26 19:04:13.675487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.442 [2024-11-26 19:04:13.678706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.442 [2024-11-26 19:04:13.678768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.442 BaseBdev1 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 BaseBdev2_malloc 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 [2024-11-26 19:04:13.729346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:22.442 [2024-11-26 19:04:13.729425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.442 [2024-11-26 19:04:13.729457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:22.442 [2024-11-26 19:04:13.729475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.442 [2024-11-26 19:04:13.732447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.442 [2024-11-26 19:04:13.732639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.442 BaseBdev2 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 BaseBdev3_malloc 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.442 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 [2024-11-26 19:04:13.802980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:22.442 [2024-11-26 19:04:13.803072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.442 [2024-11-26 19:04:13.803106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:22.442 [2024-11-26 19:04:13.803125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.442 [2024-11-26 19:04:13.806274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.442 [2024-11-26 19:04:13.806324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:22.700 BaseBdev3 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 BaseBdev4_malloc 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 [2024-11-26 19:04:13.861861] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:22.701 [2024-11-26 19:04:13.862112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.701 [2024-11-26 19:04:13.862285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:22.701 [2024-11-26 19:04:13.862427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.701 [2024-11-26 19:04:13.865527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.701 [2024-11-26 19:04:13.865705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:22.701 BaseBdev4 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 spare_malloc 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 spare_delay 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 [2024-11-26 19:04:13.933409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:22.701 [2024-11-26 19:04:13.933492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.701 [2024-11-26 19:04:13.933520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:22.701 [2024-11-26 19:04:13.933539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.701 [2024-11-26 19:04:13.936497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.701 [2024-11-26 19:04:13.936669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:22.701 spare 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 [2024-11-26 19:04:13.945611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.701 [2024-11-26 19:04:13.948293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.701 [2024-11-26 19:04:13.948381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:22.701 [2024-11-26 19:04:13.948489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:22.701 [2024-11-26 19:04:13.948622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:22.701 
[2024-11-26 19:04:13.948642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:22.701 [2024-11-26 19:04:13.949043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:22.701 [2024-11-26 19:04:13.956401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:22.701 [2024-11-26 19:04:13.956604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:22.701 [2024-11-26 19:04:13.957051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.701 19:04:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.701 19:04:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.701 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.701 "name": "raid_bdev1", 00:17:22.701 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:22.701 "strip_size_kb": 64, 00:17:22.701 "state": "online", 00:17:22.701 "raid_level": "raid5f", 00:17:22.701 "superblock": false, 00:17:22.701 "num_base_bdevs": 4, 00:17:22.701 "num_base_bdevs_discovered": 4, 00:17:22.701 "num_base_bdevs_operational": 4, 00:17:22.701 "base_bdevs_list": [ 00:17:22.701 { 00:17:22.701 "name": "BaseBdev1", 00:17:22.701 "uuid": "e661a45d-e03e-5792-8a37-3f4d781b4615", 00:17:22.701 "is_configured": true, 00:17:22.701 "data_offset": 0, 00:17:22.701 "data_size": 65536 00:17:22.701 }, 00:17:22.701 { 00:17:22.701 "name": "BaseBdev2", 00:17:22.701 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:22.701 "is_configured": true, 00:17:22.701 "data_offset": 0, 00:17:22.701 "data_size": 65536 00:17:22.701 }, 00:17:22.701 { 00:17:22.701 "name": "BaseBdev3", 00:17:22.701 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:22.701 "is_configured": true, 00:17:22.701 "data_offset": 0, 00:17:22.701 "data_size": 65536 00:17:22.701 }, 00:17:22.701 { 00:17:22.701 "name": "BaseBdev4", 00:17:22.701 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:22.701 "is_configured": true, 00:17:22.701 "data_offset": 0, 00:17:22.701 "data_size": 65536 00:17:22.701 } 00:17:22.701 ] 00:17:22.701 }' 00:17:22.701 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.701 19:04:14 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.307 [2024-11-26 19:04:14.497318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.307 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:23.566 [2024-11-26 19:04:14.925308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:23.824 /dev/nbd0 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.824 19:04:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.824 1+0 records in 00:17:23.824 1+0 records out 00:17:23.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354764 s, 11.5 MB/s 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:23.824 19:04:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:24.390 512+0 records in 00:17:24.390 512+0 records out 00:17:24.390 100663296 bytes (101 MB, 96 MiB) copied, 0.673379 s, 149 MB/s 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.390 19:04:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.648 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.648 [2024-11-26 19:04:16.010469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.648 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.648 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.648 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.907 [2024-11-26 19:04:16.026288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.907 "name": "raid_bdev1", 00:17:24.907 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:24.907 "strip_size_kb": 64, 00:17:24.907 "state": "online", 00:17:24.907 "raid_level": "raid5f", 00:17:24.907 
"superblock": false, 00:17:24.907 "num_base_bdevs": 4, 00:17:24.907 "num_base_bdevs_discovered": 3, 00:17:24.907 "num_base_bdevs_operational": 3, 00:17:24.907 "base_bdevs_list": [ 00:17:24.907 { 00:17:24.907 "name": null, 00:17:24.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.907 "is_configured": false, 00:17:24.907 "data_offset": 0, 00:17:24.907 "data_size": 65536 00:17:24.907 }, 00:17:24.907 { 00:17:24.907 "name": "BaseBdev2", 00:17:24.907 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:24.907 "is_configured": true, 00:17:24.907 "data_offset": 0, 00:17:24.907 "data_size": 65536 00:17:24.907 }, 00:17:24.907 { 00:17:24.907 "name": "BaseBdev3", 00:17:24.907 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:24.907 "is_configured": true, 00:17:24.907 "data_offset": 0, 00:17:24.907 "data_size": 65536 00:17:24.907 }, 00:17:24.907 { 00:17:24.907 "name": "BaseBdev4", 00:17:24.907 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:24.907 "is_configured": true, 00:17:24.907 "data_offset": 0, 00:17:24.907 "data_size": 65536 00:17:24.907 } 00:17:24.907 ] 00:17:24.907 }' 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.907 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.475 19:04:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.475 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.475 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.475 [2024-11-26 19:04:16.550487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.475 [2024-11-26 19:04:16.564542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:25.475 19:04:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.475 19:04:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:25.475 [2024-11-26 19:04:16.573735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.412 "name": "raid_bdev1", 00:17:26.412 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:26.412 "strip_size_kb": 64, 00:17:26.412 "state": "online", 00:17:26.412 "raid_level": "raid5f", 00:17:26.412 "superblock": false, 00:17:26.412 "num_base_bdevs": 4, 00:17:26.412 "num_base_bdevs_discovered": 4, 00:17:26.412 "num_base_bdevs_operational": 4, 00:17:26.412 "process": { 00:17:26.412 "type": "rebuild", 00:17:26.412 "target": "spare", 00:17:26.412 "progress": { 00:17:26.412 "blocks": 17280, 00:17:26.412 "percent": 8 00:17:26.412 } 00:17:26.412 }, 00:17:26.412 
"base_bdevs_list": [ 00:17:26.412 { 00:17:26.412 "name": "spare", 00:17:26.412 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:26.412 "is_configured": true, 00:17:26.412 "data_offset": 0, 00:17:26.412 "data_size": 65536 00:17:26.412 }, 00:17:26.412 { 00:17:26.412 "name": "BaseBdev2", 00:17:26.412 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:26.412 "is_configured": true, 00:17:26.412 "data_offset": 0, 00:17:26.412 "data_size": 65536 00:17:26.412 }, 00:17:26.412 { 00:17:26.412 "name": "BaseBdev3", 00:17:26.412 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:26.412 "is_configured": true, 00:17:26.412 "data_offset": 0, 00:17:26.412 "data_size": 65536 00:17:26.412 }, 00:17:26.412 { 00:17:26.412 "name": "BaseBdev4", 00:17:26.412 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:26.412 "is_configured": true, 00:17:26.412 "data_offset": 0, 00:17:26.412 "data_size": 65536 00:17:26.412 } 00:17:26.412 ] 00:17:26.412 }' 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.412 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.412 [2024-11-26 19:04:17.730958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.671 [2024-11-26 19:04:17.786422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.671 
[2024-11-26 19:04:17.786568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.671 [2024-11-26 19:04:17.786598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.671 [2024-11-26 19:04:17.786620] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
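The `verify_raid_bdev_process` and `verify_raid_bdev_state` checks echoed throughout this trace reduce to two jq filters over the `bdev_raid_get_bdevs all` RPC output: `'.[] | select(.name == "raid_bdev1")'` to pick the array, then `'.process.type // "none"'` and `'.process.target // "none"'` to read the rebuild state. A minimal Python sketch of the same check, for readers without a jq toolchain at hand (the sample JSON is abbreviated from the response captured above; the helper name simply mirrors the shell function and is not part of SPDK):

```python
import json

# RPC response shaped like the `bdev_raid_get_bdevs all` output in the
# trace above (fields abbreviated; values taken from the captured JSON).
rpc_output = json.loads("""
[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "process": {"type": "rebuild", "target": "spare",
              "progress": {"blocks": 17280, "percent": 8}}
}]
""")

def verify_raid_bdev_process(bdevs, name, want_type, want_target):
    """Python equivalent of the shell check: select the named bdev,
    then compare .process.type and .process.target, defaulting to
    "none" when no background process is running (jq's `// "none"`)."""
    info = next(b for b in bdevs if b["name"] == name)
    proc = info.get("process") or {}
    assert proc.get("type", "none") == want_type, proc
    assert proc.get("target", "none") == want_target, proc
    return proc

proc = verify_raid_bdev_process(rpc_output, "raid_bdev1", "rebuild", "spare")
print(proc["progress"]["percent"])  # 8
```

The `// "none"` default is what lets the same helper later assert `none == none` once the rebuild has finished and the `process` object disappears from the RPC output, as happens at the `verify_raid_bdev_process raid_bdev1 none none` call further down.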
00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.671 "name": "raid_bdev1", 00:17:26.671 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:26.671 "strip_size_kb": 64, 00:17:26.671 "state": "online", 00:17:26.671 "raid_level": "raid5f", 00:17:26.671 "superblock": false, 00:17:26.671 "num_base_bdevs": 4, 00:17:26.671 "num_base_bdevs_discovered": 3, 00:17:26.671 "num_base_bdevs_operational": 3, 00:17:26.671 "base_bdevs_list": [ 00:17:26.671 { 00:17:26.671 "name": null, 00:17:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.671 "is_configured": false, 00:17:26.671 "data_offset": 0, 00:17:26.671 "data_size": 65536 00:17:26.671 }, 00:17:26.671 { 00:17:26.671 "name": "BaseBdev2", 00:17:26.671 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:26.671 "is_configured": true, 00:17:26.671 "data_offset": 0, 00:17:26.671 "data_size": 65536 00:17:26.671 }, 00:17:26.671 { 00:17:26.671 "name": "BaseBdev3", 00:17:26.671 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:26.671 "is_configured": true, 00:17:26.671 "data_offset": 0, 00:17:26.671 "data_size": 65536 00:17:26.671 }, 00:17:26.671 { 00:17:26.671 "name": "BaseBdev4", 00:17:26.671 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:26.671 "is_configured": true, 00:17:26.671 "data_offset": 0, 00:17:26.671 "data_size": 65536 00:17:26.671 } 00:17:26.671 ] 00:17:26.671 }' 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.671 19:04:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.241 19:04:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.241 "name": "raid_bdev1", 00:17:27.241 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:27.241 "strip_size_kb": 64, 00:17:27.241 "state": "online", 00:17:27.241 "raid_level": "raid5f", 00:17:27.241 "superblock": false, 00:17:27.241 "num_base_bdevs": 4, 00:17:27.241 "num_base_bdevs_discovered": 3, 00:17:27.241 "num_base_bdevs_operational": 3, 00:17:27.241 "base_bdevs_list": [ 00:17:27.241 { 00:17:27.241 "name": null, 00:17:27.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.241 "is_configured": false, 00:17:27.241 "data_offset": 0, 00:17:27.241 "data_size": 65536 00:17:27.241 }, 00:17:27.241 { 00:17:27.241 "name": "BaseBdev2", 00:17:27.241 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:27.241 "is_configured": true, 00:17:27.241 "data_offset": 0, 00:17:27.241 "data_size": 65536 00:17:27.241 }, 00:17:27.241 { 00:17:27.241 "name": "BaseBdev3", 00:17:27.241 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:27.241 "is_configured": true, 00:17:27.241 "data_offset": 0, 00:17:27.241 "data_size": 65536 00:17:27.241 }, 
00:17:27.241 { 00:17:27.241 "name": "BaseBdev4", 00:17:27.241 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:27.241 "is_configured": true, 00:17:27.241 "data_offset": 0, 00:17:27.241 "data_size": 65536 00:17:27.241 } 00:17:27.241 ] 00:17:27.241 }' 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.241 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.242 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.242 19:04:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.242 19:04:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.242 [2024-11-26 19:04:18.495335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.242 [2024-11-26 19:04:18.509593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:27.242 19:04:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.242 19:04:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:27.242 [2024-11-26 19:04:18.518996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.177 19:04:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.177 19:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.436 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.436 "name": "raid_bdev1", 00:17:28.436 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:28.436 "strip_size_kb": 64, 00:17:28.436 "state": "online", 00:17:28.436 "raid_level": "raid5f", 00:17:28.436 "superblock": false, 00:17:28.436 "num_base_bdevs": 4, 00:17:28.436 "num_base_bdevs_discovered": 4, 00:17:28.436 "num_base_bdevs_operational": 4, 00:17:28.436 "process": { 00:17:28.436 "type": "rebuild", 00:17:28.436 "target": "spare", 00:17:28.436 "progress": { 00:17:28.436 "blocks": 17280, 00:17:28.436 "percent": 8 00:17:28.436 } 00:17:28.436 }, 00:17:28.436 "base_bdevs_list": [ 00:17:28.436 { 00:17:28.436 "name": "spare", 00:17:28.436 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:28.436 "is_configured": true, 00:17:28.436 "data_offset": 0, 00:17:28.436 "data_size": 65536 00:17:28.436 }, 00:17:28.436 { 00:17:28.436 "name": "BaseBdev2", 00:17:28.436 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:28.436 "is_configured": true, 00:17:28.436 "data_offset": 0, 00:17:28.436 "data_size": 65536 00:17:28.436 }, 00:17:28.436 { 00:17:28.436 "name": "BaseBdev3", 00:17:28.436 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:28.436 
"is_configured": true, 00:17:28.436 "data_offset": 0, 00:17:28.436 "data_size": 65536 00:17:28.436 }, 00:17:28.437 { 00:17:28.437 "name": "BaseBdev4", 00:17:28.437 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:28.437 "is_configured": true, 00:17:28.437 "data_offset": 0, 00:17:28.437 "data_size": 65536 00:17:28.437 } 00:17:28.437 ] 00:17:28.437 }' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=678 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.437 "name": "raid_bdev1", 00:17:28.437 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:28.437 "strip_size_kb": 64, 00:17:28.437 "state": "online", 00:17:28.437 "raid_level": "raid5f", 00:17:28.437 "superblock": false, 00:17:28.437 "num_base_bdevs": 4, 00:17:28.437 "num_base_bdevs_discovered": 4, 00:17:28.437 "num_base_bdevs_operational": 4, 00:17:28.437 "process": { 00:17:28.437 "type": "rebuild", 00:17:28.437 "target": "spare", 00:17:28.437 "progress": { 00:17:28.437 "blocks": 21120, 00:17:28.437 "percent": 10 00:17:28.437 } 00:17:28.437 }, 00:17:28.437 "base_bdevs_list": [ 00:17:28.437 { 00:17:28.437 "name": "spare", 00:17:28.437 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:28.437 "is_configured": true, 00:17:28.437 "data_offset": 0, 00:17:28.437 "data_size": 65536 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "name": "BaseBdev2", 00:17:28.437 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:28.437 "is_configured": true, 00:17:28.437 "data_offset": 0, 00:17:28.437 "data_size": 65536 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "name": "BaseBdev3", 00:17:28.437 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:28.437 "is_configured": true, 00:17:28.437 "data_offset": 0, 00:17:28.437 "data_size": 65536 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "name": "BaseBdev4", 00:17:28.437 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:28.437 "is_configured": true, 00:17:28.437 "data_offset": 0, 
00:17:28.437 "data_size": 65536 00:17:28.437 } 00:17:28.437 ] 00:17:28.437 }' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.437 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.696 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.697 19:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.634 "name": "raid_bdev1", 00:17:29.634 "uuid": 
"4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:29.634 "strip_size_kb": 64, 00:17:29.634 "state": "online", 00:17:29.634 "raid_level": "raid5f", 00:17:29.634 "superblock": false, 00:17:29.634 "num_base_bdevs": 4, 00:17:29.634 "num_base_bdevs_discovered": 4, 00:17:29.634 "num_base_bdevs_operational": 4, 00:17:29.634 "process": { 00:17:29.634 "type": "rebuild", 00:17:29.634 "target": "spare", 00:17:29.634 "progress": { 00:17:29.634 "blocks": 44160, 00:17:29.634 "percent": 22 00:17:29.634 } 00:17:29.634 }, 00:17:29.634 "base_bdevs_list": [ 00:17:29.634 { 00:17:29.634 "name": "spare", 00:17:29.634 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:29.634 "is_configured": true, 00:17:29.634 "data_offset": 0, 00:17:29.634 "data_size": 65536 00:17:29.634 }, 00:17:29.634 { 00:17:29.634 "name": "BaseBdev2", 00:17:29.634 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:29.634 "is_configured": true, 00:17:29.634 "data_offset": 0, 00:17:29.634 "data_size": 65536 00:17:29.634 }, 00:17:29.634 { 00:17:29.634 "name": "BaseBdev3", 00:17:29.634 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:29.634 "is_configured": true, 00:17:29.634 "data_offset": 0, 00:17:29.634 "data_size": 65536 00:17:29.634 }, 00:17:29.634 { 00:17:29.634 "name": "BaseBdev4", 00:17:29.634 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:29.634 "is_configured": true, 00:17:29.634 "data_offset": 0, 00:17:29.634 "data_size": 65536 00:17:29.634 } 00:17:29.634 ] 00:17:29.634 }' 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.634 19:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.011 19:04:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.011 19:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.011 19:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.011 19:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.011 "name": "raid_bdev1", 00:17:31.011 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:31.011 "strip_size_kb": 64, 00:17:31.011 "state": "online", 00:17:31.011 "raid_level": "raid5f", 00:17:31.011 "superblock": false, 00:17:31.011 "num_base_bdevs": 4, 00:17:31.011 "num_base_bdevs_discovered": 4, 00:17:31.011 "num_base_bdevs_operational": 4, 00:17:31.011 "process": { 00:17:31.011 "type": "rebuild", 00:17:31.011 "target": "spare", 00:17:31.011 "progress": { 00:17:31.011 "blocks": 65280, 00:17:31.011 "percent": 33 00:17:31.011 } 00:17:31.011 }, 00:17:31.011 "base_bdevs_list": [ 00:17:31.011 { 00:17:31.011 "name": "spare", 00:17:31.011 "uuid": 
"d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:31.011 "is_configured": true, 00:17:31.011 "data_offset": 0, 00:17:31.011 "data_size": 65536 00:17:31.011 }, 00:17:31.011 { 00:17:31.011 "name": "BaseBdev2", 00:17:31.012 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:31.012 "is_configured": true, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 }, 00:17:31.012 { 00:17:31.012 "name": "BaseBdev3", 00:17:31.012 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:31.012 "is_configured": true, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 }, 00:17:31.012 { 00:17:31.012 "name": "BaseBdev4", 00:17:31.012 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:31.012 "is_configured": true, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 } 00:17:31.012 ] 00:17:31.012 }' 00:17:31.012 19:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.012 19:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.012 19:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.012 19:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.012 19:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.950 19:04:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.950 "name": "raid_bdev1", 00:17:31.950 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:31.950 "strip_size_kb": 64, 00:17:31.950 "state": "online", 00:17:31.950 "raid_level": "raid5f", 00:17:31.950 "superblock": false, 00:17:31.950 "num_base_bdevs": 4, 00:17:31.950 "num_base_bdevs_discovered": 4, 00:17:31.950 "num_base_bdevs_operational": 4, 00:17:31.950 "process": { 00:17:31.950 "type": "rebuild", 00:17:31.950 "target": "spare", 00:17:31.950 "progress": { 00:17:31.950 "blocks": 88320, 00:17:31.950 "percent": 44 00:17:31.950 } 00:17:31.950 }, 00:17:31.950 "base_bdevs_list": [ 00:17:31.950 { 00:17:31.950 "name": "spare", 00:17:31.950 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:31.950 "is_configured": true, 00:17:31.950 "data_offset": 0, 00:17:31.950 "data_size": 65536 00:17:31.950 }, 00:17:31.950 { 00:17:31.950 "name": "BaseBdev2", 00:17:31.950 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:31.950 "is_configured": true, 00:17:31.950 "data_offset": 0, 00:17:31.950 "data_size": 65536 00:17:31.950 }, 00:17:31.950 { 00:17:31.950 "name": "BaseBdev3", 00:17:31.950 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:31.950 "is_configured": true, 00:17:31.950 "data_offset": 0, 00:17:31.950 "data_size": 65536 00:17:31.950 }, 
00:17:31.950 { 00:17:31.950 "name": "BaseBdev4", 00:17:31.950 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:31.950 "is_configured": true, 00:17:31.950 "data_offset": 0, 00:17:31.950 "data_size": 65536 00:17:31.950 } 00:17:31.950 ] 00:17:31.950 }' 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.950 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.210 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.210 19:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:33.204 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.204 "name": "raid_bdev1", 00:17:33.204 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:33.204 "strip_size_kb": 64, 00:17:33.204 "state": "online", 00:17:33.204 "raid_level": "raid5f", 00:17:33.204 "superblock": false, 00:17:33.204 "num_base_bdevs": 4, 00:17:33.204 "num_base_bdevs_discovered": 4, 00:17:33.204 "num_base_bdevs_operational": 4, 00:17:33.204 "process": { 00:17:33.204 "type": "rebuild", 00:17:33.204 "target": "spare", 00:17:33.205 "progress": { 00:17:33.205 "blocks": 109440, 00:17:33.205 "percent": 55 00:17:33.205 } 00:17:33.205 }, 00:17:33.205 "base_bdevs_list": [ 00:17:33.205 { 00:17:33.205 "name": "spare", 00:17:33.205 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:33.205 "is_configured": true, 00:17:33.205 "data_offset": 0, 00:17:33.205 "data_size": 65536 00:17:33.205 }, 00:17:33.205 { 00:17:33.205 "name": "BaseBdev2", 00:17:33.205 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:33.205 "is_configured": true, 00:17:33.205 "data_offset": 0, 00:17:33.205 "data_size": 65536 00:17:33.205 }, 00:17:33.205 { 00:17:33.205 "name": "BaseBdev3", 00:17:33.205 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:33.205 "is_configured": true, 00:17:33.205 "data_offset": 0, 00:17:33.205 "data_size": 65536 00:17:33.205 }, 00:17:33.205 { 00:17:33.205 "name": "BaseBdev4", 00:17:33.205 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:33.205 "is_configured": true, 00:17:33.205 "data_offset": 0, 00:17:33.205 "data_size": 65536 00:17:33.205 } 00:17:33.205 ] 00:17:33.205 }' 00:17:33.205 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.205 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.205 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.205 19:04:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.205 19:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.142 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.142 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.142 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.142 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.142 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.142 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.402 "name": "raid_bdev1", 00:17:34.402 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:34.402 "strip_size_kb": 64, 00:17:34.402 "state": "online", 00:17:34.402 "raid_level": "raid5f", 00:17:34.402 "superblock": false, 00:17:34.402 "num_base_bdevs": 4, 00:17:34.402 "num_base_bdevs_discovered": 4, 00:17:34.402 "num_base_bdevs_operational": 4, 00:17:34.402 "process": { 00:17:34.402 "type": "rebuild", 00:17:34.402 "target": "spare", 00:17:34.402 "progress": { 00:17:34.402 "blocks": 132480, 
00:17:34.402 "percent": 67 00:17:34.402 } 00:17:34.402 }, 00:17:34.402 "base_bdevs_list": [ 00:17:34.402 { 00:17:34.402 "name": "spare", 00:17:34.402 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:34.402 "is_configured": true, 00:17:34.402 "data_offset": 0, 00:17:34.402 "data_size": 65536 00:17:34.402 }, 00:17:34.402 { 00:17:34.402 "name": "BaseBdev2", 00:17:34.402 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:34.402 "is_configured": true, 00:17:34.402 "data_offset": 0, 00:17:34.402 "data_size": 65536 00:17:34.402 }, 00:17:34.402 { 00:17:34.402 "name": "BaseBdev3", 00:17:34.402 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:34.402 "is_configured": true, 00:17:34.402 "data_offset": 0, 00:17:34.402 "data_size": 65536 00:17:34.402 }, 00:17:34.402 { 00:17:34.402 "name": "BaseBdev4", 00:17:34.402 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:34.402 "is_configured": true, 00:17:34.402 "data_offset": 0, 00:17:34.402 "data_size": 65536 00:17:34.402 } 00:17:34.402 ] 00:17:34.402 }' 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.402 19:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.339 19:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.598 19:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.598 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.598 "name": "raid_bdev1", 00:17:35.598 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:35.598 "strip_size_kb": 64, 00:17:35.599 "state": "online", 00:17:35.599 "raid_level": "raid5f", 00:17:35.599 "superblock": false, 00:17:35.599 "num_base_bdevs": 4, 00:17:35.599 "num_base_bdevs_discovered": 4, 00:17:35.599 "num_base_bdevs_operational": 4, 00:17:35.599 "process": { 00:17:35.599 "type": "rebuild", 00:17:35.599 "target": "spare", 00:17:35.599 "progress": { 00:17:35.599 "blocks": 153600, 00:17:35.599 "percent": 78 00:17:35.599 } 00:17:35.599 }, 00:17:35.599 "base_bdevs_list": [ 00:17:35.599 { 00:17:35.599 "name": "spare", 00:17:35.599 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:35.599 "is_configured": true, 00:17:35.599 "data_offset": 0, 00:17:35.599 "data_size": 65536 00:17:35.599 }, 00:17:35.599 { 00:17:35.599 "name": "BaseBdev2", 00:17:35.599 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:35.599 "is_configured": true, 00:17:35.599 "data_offset": 0, 00:17:35.599 "data_size": 65536 00:17:35.599 }, 00:17:35.599 { 00:17:35.599 "name": "BaseBdev3", 00:17:35.599 "uuid": 
"33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:35.599 "is_configured": true, 00:17:35.599 "data_offset": 0, 00:17:35.599 "data_size": 65536 00:17:35.599 }, 00:17:35.599 { 00:17:35.599 "name": "BaseBdev4", 00:17:35.599 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:35.599 "is_configured": true, 00:17:35.599 "data_offset": 0, 00:17:35.599 "data_size": 65536 00:17:35.599 } 00:17:35.599 ] 00:17:35.599 }' 00:17:35.599 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.599 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.599 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.599 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.599 19:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.536 19:04:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.536 19:04:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.795 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.795 "name": "raid_bdev1", 00:17:36.795 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:36.795 "strip_size_kb": 64, 00:17:36.795 "state": "online", 00:17:36.795 "raid_level": "raid5f", 00:17:36.795 "superblock": false, 00:17:36.795 "num_base_bdevs": 4, 00:17:36.795 "num_base_bdevs_discovered": 4, 00:17:36.795 "num_base_bdevs_operational": 4, 00:17:36.795 "process": { 00:17:36.795 "type": "rebuild", 00:17:36.795 "target": "spare", 00:17:36.795 "progress": { 00:17:36.795 "blocks": 176640, 00:17:36.795 "percent": 89 00:17:36.795 } 00:17:36.795 }, 00:17:36.795 "base_bdevs_list": [ 00:17:36.795 { 00:17:36.795 "name": "spare", 00:17:36.795 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:36.795 "is_configured": true, 00:17:36.795 "data_offset": 0, 00:17:36.795 "data_size": 65536 00:17:36.795 }, 00:17:36.795 { 00:17:36.795 "name": "BaseBdev2", 00:17:36.795 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:36.795 "is_configured": true, 00:17:36.795 "data_offset": 0, 00:17:36.795 "data_size": 65536 00:17:36.795 }, 00:17:36.795 { 00:17:36.795 "name": "BaseBdev3", 00:17:36.795 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:36.795 "is_configured": true, 00:17:36.795 "data_offset": 0, 00:17:36.795 "data_size": 65536 00:17:36.795 }, 00:17:36.795 { 00:17:36.795 "name": "BaseBdev4", 00:17:36.796 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:36.796 "is_configured": true, 00:17:36.796 "data_offset": 0, 00:17:36.796 "data_size": 65536 00:17:36.796 } 00:17:36.796 ] 00:17:36.796 }' 00:17:36.796 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.796 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:36.796 19:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.796 19:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.796 19:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.750 [2024-11-26 19:04:28.931703] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:37.750 [2024-11-26 19:04:28.931822] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:37.750 [2024-11-26 19:04:28.931910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.750 "name": "raid_bdev1", 00:17:37.750 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:37.750 "strip_size_kb": 64, 00:17:37.750 "state": "online", 00:17:37.750 "raid_level": "raid5f", 00:17:37.750 "superblock": false, 00:17:37.750 "num_base_bdevs": 4, 00:17:37.750 "num_base_bdevs_discovered": 4, 00:17:37.750 "num_base_bdevs_operational": 4, 00:17:37.750 "base_bdevs_list": [ 00:17:37.750 { 00:17:37.750 "name": "spare", 00:17:37.750 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:37.750 "is_configured": true, 00:17:37.750 "data_offset": 0, 00:17:37.750 "data_size": 65536 00:17:37.750 }, 00:17:37.750 { 00:17:37.750 "name": "BaseBdev2", 00:17:37.750 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:37.750 "is_configured": true, 00:17:37.750 "data_offset": 0, 00:17:37.750 "data_size": 65536 00:17:37.750 }, 00:17:37.750 { 00:17:37.750 "name": "BaseBdev3", 00:17:37.750 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:37.750 "is_configured": true, 00:17:37.750 "data_offset": 0, 00:17:37.750 "data_size": 65536 00:17:37.750 }, 00:17:37.750 { 00:17:37.750 "name": "BaseBdev4", 00:17:37.750 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:37.750 "is_configured": true, 00:17:37.750 "data_offset": 0, 00:17:37.750 "data_size": 65536 00:17:37.750 } 00:17:37.750 ] 00:17:37.750 }' 00:17:37.750 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.015 "name": "raid_bdev1", 00:17:38.015 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:38.015 "strip_size_kb": 64, 00:17:38.015 "state": "online", 00:17:38.015 "raid_level": "raid5f", 00:17:38.015 "superblock": false, 00:17:38.015 "num_base_bdevs": 4, 00:17:38.015 "num_base_bdevs_discovered": 4, 00:17:38.015 "num_base_bdevs_operational": 4, 00:17:38.015 "base_bdevs_list": [ 00:17:38.015 { 00:17:38.015 "name": "spare", 00:17:38.015 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:38.015 "is_configured": true, 00:17:38.015 "data_offset": 0, 00:17:38.015 "data_size": 65536 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "name": "BaseBdev2", 00:17:38.015 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:38.015 "is_configured": true, 00:17:38.015 "data_offset": 0, 00:17:38.015 "data_size": 65536 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "name": "BaseBdev3", 
00:17:38.015 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:38.015 "is_configured": true, 00:17:38.015 "data_offset": 0, 00:17:38.015 "data_size": 65536 00:17:38.015 }, 00:17:38.015 { 00:17:38.015 "name": "BaseBdev4", 00:17:38.015 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:38.015 "is_configured": true, 00:17:38.015 "data_offset": 0, 00:17:38.015 "data_size": 65536 00:17:38.015 } 00:17:38.015 ] 00:17:38.015 }' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.015 19:04:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.015 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.273 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.273 "name": "raid_bdev1", 00:17:38.273 "uuid": "4bb380cc-5930-46e4-9283-398bb5fbda8b", 00:17:38.273 "strip_size_kb": 64, 00:17:38.273 "state": "online", 00:17:38.273 "raid_level": "raid5f", 00:17:38.273 "superblock": false, 00:17:38.273 "num_base_bdevs": 4, 00:17:38.273 "num_base_bdevs_discovered": 4, 00:17:38.273 "num_base_bdevs_operational": 4, 00:17:38.273 "base_bdevs_list": [ 00:17:38.273 { 00:17:38.273 "name": "spare", 00:17:38.273 "uuid": "d04877ae-40bb-5b1a-a728-d9d7d9fe7a40", 00:17:38.273 "is_configured": true, 00:17:38.273 "data_offset": 0, 00:17:38.273 "data_size": 65536 00:17:38.273 }, 00:17:38.273 { 00:17:38.273 "name": "BaseBdev2", 00:17:38.273 "uuid": "0981fa0f-cca8-5cb0-ad24-e353a4fb17d1", 00:17:38.274 "is_configured": true, 00:17:38.274 "data_offset": 0, 00:17:38.274 "data_size": 65536 00:17:38.274 }, 00:17:38.274 { 00:17:38.274 "name": "BaseBdev3", 00:17:38.274 "uuid": "33dfee12-038a-517d-a853-a9a1d7aa3b46", 00:17:38.274 "is_configured": true, 00:17:38.274 "data_offset": 0, 00:17:38.274 "data_size": 65536 00:17:38.274 }, 00:17:38.274 { 00:17:38.274 "name": "BaseBdev4", 00:17:38.274 "uuid": "972c871d-de0d-5dec-a262-93a785070f0e", 00:17:38.274 "is_configured": true, 00:17:38.274 "data_offset": 0, 00:17:38.274 "data_size": 65536 00:17:38.274 } 00:17:38.274 ] 00:17:38.274 }' 00:17:38.274 19:04:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.274 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.533 [2024-11-26 19:04:29.864867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.533 [2024-11-26 19:04:29.865107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.533 [2024-11-26 19:04:29.865248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.533 [2024-11-26 19:04:29.865373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.533 [2024-11-26 19:04:29.865407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:38.533 19:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.791 19:04:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:39.049 /dev/nbd0 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:39.049 19:04:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.049 1+0 records in 00:17:39.049 1+0 records out 00:17:39.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260153 s, 15.7 MB/s 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.049 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:39.308 /dev/nbd1 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 
00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:39.308 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.308 1+0 records in 00:17:39.308 1+0 records out 00:17:39.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385304 s, 10.6 MB/s 00:17:39.565 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.565 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:39.566 
19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.566 19:04:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.823 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:40.081 19:04:31 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85068 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85068 ']' 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85068 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85068 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.081 killing process with pid 85068 00:17:40.081 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85068' 00:17:40.081 Received shutdown signal, test time was about 60.000000 seconds 00:17:40.081 00:17:40.082 Latency(us) 00:17:40.082 [2024-11-26T19:04:31.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.082 [2024-11-26T19:04:31.449Z] 
=================================================================================================================== 00:17:40.082 [2024-11-26T19:04:31.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.082 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85068 00:17:40.082 19:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85068 00:17:40.082 [2024-11-26 19:04:31.444136] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.651 [2024-11-26 19:04:31.885918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.586 ************************************ 00:17:41.586 END TEST raid5f_rebuild_test 00:17:41.586 ************************************ 00:17:41.586 19:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:41.586 00:17:41.586 real 0m20.469s 00:17:41.586 user 0m25.592s 00:17:41.586 sys 0m2.379s 00:17:41.586 19:04:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.586 19:04:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.845 19:04:32 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:41.845 19:04:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:41.845 19:04:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.845 19:04:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.845 ************************************ 00:17:41.845 START TEST raid5f_rebuild_test_sb 00:17:41.845 ************************************ 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.845 19:04:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85578 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85578 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85578 ']' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:41.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.845 19:04:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.845 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:41.845 Zero copy mechanism will not be used. 00:17:41.845 [2024-11-26 19:04:33.087563] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:17:41.845 [2024-11-26 19:04:33.087741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85578 ] 00:17:42.104 [2024-11-26 19:04:33.261991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.104 [2024-11-26 19:04:33.390467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.362 [2024-11-26 19:04:33.590775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.362 [2024-11-26 19:04:33.590817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:42.931 19:04:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.931 BaseBdev1_malloc 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.931 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.931 [2024-11-26 19:04:34.163796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:42.932 [2024-11-26 19:04:34.163911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.932 [2024-11-26 19:04:34.163965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:42.932 [2024-11-26 19:04:34.163991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.932 [2024-11-26 19:04:34.166926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.932 [2024-11-26 19:04:34.166971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.932 BaseBdev1 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:42.932 BaseBdev2_malloc 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.932 [2024-11-26 19:04:34.219483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:42.932 [2024-11-26 19:04:34.219575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.932 [2024-11-26 19:04:34.219608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:42.932 [2024-11-26 19:04:34.219626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.932 [2024-11-26 19:04:34.222513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.932 [2024-11-26 19:04:34.222559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.932 BaseBdev2 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.932 BaseBdev3_malloc 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.932 19:04:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.932 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.932 [2024-11-26 19:04:34.291561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:42.932 [2024-11-26 19:04:34.291634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.932 [2024-11-26 19:04:34.291666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:42.932 [2024-11-26 19:04:34.291685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.932 [2024-11-26 19:04:34.294537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.932 [2024-11-26 19:04:34.294588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:43.204 BaseBdev3 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.204 BaseBdev4_malloc 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.204 [2024-11-26 19:04:34.343489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:43.204 [2024-11-26 19:04:34.343562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.204 [2024-11-26 19:04:34.343592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:43.204 [2024-11-26 19:04:34.343610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.204 [2024-11-26 19:04:34.346398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.204 [2024-11-26 19:04:34.346462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:43.204 BaseBdev4 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.204 spare_malloc 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.204 spare_delay 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.204 [2024-11-26 19:04:34.407091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:43.204 [2024-11-26 19:04:34.407161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.204 [2024-11-26 19:04:34.407189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:43.204 [2024-11-26 19:04:34.407208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.204 [2024-11-26 19:04:34.410014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.204 [2024-11-26 19:04:34.410186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.204 spare 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.204 [2024-11-26 19:04:34.419209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.204 [2024-11-26 19:04:34.421748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.204 [2024-11-26 19:04:34.421834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:43.204 
[2024-11-26 19:04:34.421944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:43.204 [2024-11-26 19:04:34.422261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:43.204 [2024-11-26 19:04:34.422291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:43.204 [2024-11-26 19:04:34.422658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:43.204 [2024-11-26 19:04:34.429406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:43.204 [2024-11-26 19:04:34.429558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:43.204 [2024-11-26 19:04:34.430031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.204 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.205 "name": "raid_bdev1", 00:17:43.205 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:43.205 "strip_size_kb": 64, 00:17:43.205 "state": "online", 00:17:43.205 "raid_level": "raid5f", 00:17:43.205 "superblock": true, 00:17:43.205 "num_base_bdevs": 4, 00:17:43.205 "num_base_bdevs_discovered": 4, 00:17:43.205 "num_base_bdevs_operational": 4, 00:17:43.205 "base_bdevs_list": [ 00:17:43.205 { 00:17:43.205 "name": "BaseBdev1", 00:17:43.205 "uuid": "683c7718-7b72-5286-9a0d-57d77c9acae7", 00:17:43.205 "is_configured": true, 00:17:43.205 "data_offset": 2048, 00:17:43.205 "data_size": 63488 00:17:43.205 }, 00:17:43.205 { 00:17:43.205 "name": "BaseBdev2", 00:17:43.205 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:43.205 "is_configured": true, 00:17:43.205 "data_offset": 2048, 00:17:43.205 "data_size": 63488 00:17:43.205 }, 00:17:43.205 { 00:17:43.205 "name": "BaseBdev3", 00:17:43.205 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:43.205 "is_configured": true, 00:17:43.205 "data_offset": 2048, 00:17:43.205 "data_size": 63488 00:17:43.205 }, 00:17:43.205 { 00:17:43.205 "name": "BaseBdev4", 00:17:43.205 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:43.205 "is_configured": 
true, 00:17:43.205 "data_offset": 2048, 00:17:43.205 "data_size": 63488 00:17:43.205 } 00:17:43.205 ] 00:17:43.205 }' 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.205 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.771 [2024-11-26 19:04:34.930335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:43.771 19:04:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:43.771 19:04:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:43.771 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:44.030 [2024-11-26 19:04:35.270199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:44.030 /dev/nbd0 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.030 1+0 records in 00:17:44.030 1+0 records out 00:17:44.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314617 s, 13.0 MB/s 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:44.030 19:04:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:44.965 496+0 records in 00:17:44.965 496+0 records out 00:17:44.965 97517568 bytes (98 MB, 93 MiB) copied, 0.662313 s, 147 MB/s 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.965 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:45.223 [2024-11-26 19:04:36.336488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.223 [2024-11-26 19:04:36.352248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.223 19:04:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.223 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.223 "name": "raid_bdev1", 00:17:45.223 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:45.223 "strip_size_kb": 64, 00:17:45.223 "state": "online", 00:17:45.223 "raid_level": "raid5f", 00:17:45.223 "superblock": true, 00:17:45.223 "num_base_bdevs": 4, 00:17:45.223 "num_base_bdevs_discovered": 3, 00:17:45.223 "num_base_bdevs_operational": 3, 00:17:45.223 "base_bdevs_list": [ 00:17:45.223 { 00:17:45.223 "name": null, 00:17:45.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.223 "is_configured": false, 00:17:45.223 "data_offset": 0, 00:17:45.223 "data_size": 63488 00:17:45.223 }, 00:17:45.223 { 00:17:45.223 "name": "BaseBdev2", 00:17:45.224 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:45.224 "is_configured": true, 00:17:45.224 "data_offset": 2048, 00:17:45.224 "data_size": 63488 00:17:45.224 }, 00:17:45.224 { 00:17:45.224 "name": "BaseBdev3", 00:17:45.224 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:45.224 "is_configured": true, 00:17:45.224 "data_offset": 2048, 00:17:45.224 "data_size": 63488 00:17:45.224 }, 00:17:45.224 { 00:17:45.224 "name": "BaseBdev4", 00:17:45.224 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:45.224 "is_configured": true, 00:17:45.224 "data_offset": 2048, 00:17:45.224 "data_size": 63488 00:17:45.224 } 00:17:45.224 ] 00:17:45.224 }' 00:17:45.224 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.224 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.792 19:04:36 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 [2024-11-26 19:04:36.884404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.792 [2024-11-26 19:04:36.899181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:45.792 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 19:04:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:45.792 [2024-11-26 19:04:36.908504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.727 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.727 "name": 
"raid_bdev1", 00:17:46.727 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:46.727 "strip_size_kb": 64, 00:17:46.727 "state": "online", 00:17:46.727 "raid_level": "raid5f", 00:17:46.727 "superblock": true, 00:17:46.727 "num_base_bdevs": 4, 00:17:46.727 "num_base_bdevs_discovered": 4, 00:17:46.727 "num_base_bdevs_operational": 4, 00:17:46.727 "process": { 00:17:46.727 "type": "rebuild", 00:17:46.727 "target": "spare", 00:17:46.727 "progress": { 00:17:46.727 "blocks": 17280, 00:17:46.727 "percent": 9 00:17:46.727 } 00:17:46.727 }, 00:17:46.727 "base_bdevs_list": [ 00:17:46.727 { 00:17:46.727 "name": "spare", 00:17:46.727 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:46.727 "is_configured": true, 00:17:46.727 "data_offset": 2048, 00:17:46.727 "data_size": 63488 00:17:46.727 }, 00:17:46.727 { 00:17:46.727 "name": "BaseBdev2", 00:17:46.727 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:46.727 "is_configured": true, 00:17:46.728 "data_offset": 2048, 00:17:46.728 "data_size": 63488 00:17:46.728 }, 00:17:46.728 { 00:17:46.728 "name": "BaseBdev3", 00:17:46.728 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:46.728 "is_configured": true, 00:17:46.728 "data_offset": 2048, 00:17:46.728 "data_size": 63488 00:17:46.728 }, 00:17:46.728 { 00:17:46.728 "name": "BaseBdev4", 00:17:46.728 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:46.728 "is_configured": true, 00:17:46.728 "data_offset": 2048, 00:17:46.728 "data_size": 63488 00:17:46.728 } 00:17:46.728 ] 00:17:46.728 }' 00:17:46.728 19:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.728 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.728 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.728 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.728 19:04:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:46.728 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.728 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.728 [2024-11-26 19:04:38.069936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.986 [2024-11-26 19:04:38.121954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.986 [2024-11-26 19:04:38.122088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.986 [2024-11-26 19:04:38.122115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.986 [2024-11-26 19:04:38.122132] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.986 19:04:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.986 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.986 "name": "raid_bdev1", 00:17:46.986 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:46.986 "strip_size_kb": 64, 00:17:46.986 "state": "online", 00:17:46.986 "raid_level": "raid5f", 00:17:46.986 "superblock": true, 00:17:46.986 "num_base_bdevs": 4, 00:17:46.986 "num_base_bdevs_discovered": 3, 00:17:46.986 "num_base_bdevs_operational": 3, 00:17:46.986 "base_bdevs_list": [ 00:17:46.986 { 00:17:46.986 "name": null, 00:17:46.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.986 "is_configured": false, 00:17:46.986 "data_offset": 0, 00:17:46.986 "data_size": 63488 00:17:46.986 }, 00:17:46.986 { 00:17:46.987 "name": "BaseBdev2", 00:17:46.987 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:46.987 "is_configured": true, 00:17:46.987 "data_offset": 2048, 00:17:46.987 "data_size": 63488 00:17:46.987 }, 00:17:46.987 { 00:17:46.987 "name": "BaseBdev3", 00:17:46.987 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:46.987 "is_configured": true, 00:17:46.987 "data_offset": 2048, 00:17:46.987 "data_size": 63488 00:17:46.987 }, 00:17:46.987 { 00:17:46.987 "name": "BaseBdev4", 00:17:46.987 "uuid": 
"a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:46.987 "is_configured": true, 00:17:46.987 "data_offset": 2048, 00:17:46.987 "data_size": 63488 00:17:46.987 } 00:17:46.987 ] 00:17:46.987 }' 00:17:46.987 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.987 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.554 "name": "raid_bdev1", 00:17:47.554 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:47.554 "strip_size_kb": 64, 00:17:47.554 "state": "online", 00:17:47.554 "raid_level": "raid5f", 00:17:47.554 "superblock": true, 00:17:47.554 "num_base_bdevs": 4, 00:17:47.554 "num_base_bdevs_discovered": 3, 00:17:47.554 "num_base_bdevs_operational": 3, 00:17:47.554 
"base_bdevs_list": [ 00:17:47.554 { 00:17:47.554 "name": null, 00:17:47.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.554 "is_configured": false, 00:17:47.554 "data_offset": 0, 00:17:47.554 "data_size": 63488 00:17:47.554 }, 00:17:47.554 { 00:17:47.554 "name": "BaseBdev2", 00:17:47.554 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:47.554 "is_configured": true, 00:17:47.554 "data_offset": 2048, 00:17:47.554 "data_size": 63488 00:17:47.554 }, 00:17:47.554 { 00:17:47.554 "name": "BaseBdev3", 00:17:47.554 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:47.554 "is_configured": true, 00:17:47.554 "data_offset": 2048, 00:17:47.554 "data_size": 63488 00:17:47.554 }, 00:17:47.554 { 00:17:47.554 "name": "BaseBdev4", 00:17:47.554 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:47.554 "is_configured": true, 00:17:47.554 "data_offset": 2048, 00:17:47.554 "data_size": 63488 00:17:47.554 } 00:17:47.554 ] 00:17:47.554 }' 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.554 [2024-11-26 19:04:38.862053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.554 [2024-11-26 19:04:38.875781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 
00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.554 19:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:47.554 [2024-11-26 19:04:38.884848] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.930 "name": "raid_bdev1", 00:17:48.930 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:48.930 "strip_size_kb": 64, 00:17:48.930 "state": "online", 00:17:48.930 "raid_level": "raid5f", 00:17:48.930 "superblock": true, 00:17:48.930 "num_base_bdevs": 4, 00:17:48.930 "num_base_bdevs_discovered": 4, 00:17:48.930 "num_base_bdevs_operational": 4, 00:17:48.930 "process": { 00:17:48.930 "type": "rebuild", 
00:17:48.930 "target": "spare", 00:17:48.930 "progress": { 00:17:48.930 "blocks": 17280, 00:17:48.930 "percent": 9 00:17:48.930 } 00:17:48.930 }, 00:17:48.930 "base_bdevs_list": [ 00:17:48.930 { 00:17:48.930 "name": "spare", 00:17:48.930 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:48.930 "is_configured": true, 00:17:48.930 "data_offset": 2048, 00:17:48.930 "data_size": 63488 00:17:48.930 }, 00:17:48.930 { 00:17:48.930 "name": "BaseBdev2", 00:17:48.930 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:48.930 "is_configured": true, 00:17:48.930 "data_offset": 2048, 00:17:48.930 "data_size": 63488 00:17:48.930 }, 00:17:48.930 { 00:17:48.930 "name": "BaseBdev3", 00:17:48.930 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:48.930 "is_configured": true, 00:17:48.930 "data_offset": 2048, 00:17:48.930 "data_size": 63488 00:17:48.930 }, 00:17:48.930 { 00:17:48.930 "name": "BaseBdev4", 00:17:48.930 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:48.930 "is_configured": true, 00:17:48.930 "data_offset": 2048, 00:17:48.930 "data_size": 63488 00:17:48.930 } 00:17:48.930 ] 00:17:48.930 }' 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.930 19:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.930 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.930 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:48.931 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=699 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.931 "name": "raid_bdev1", 00:17:48.931 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:48.931 "strip_size_kb": 64, 00:17:48.931 "state": "online", 00:17:48.931 "raid_level": "raid5f", 00:17:48.931 "superblock": true, 00:17:48.931 "num_base_bdevs": 4, 00:17:48.931 "num_base_bdevs_discovered": 4, 00:17:48.931 "num_base_bdevs_operational": 4, 00:17:48.931 "process": { 00:17:48.931 "type": "rebuild", 
00:17:48.931 "target": "spare", 00:17:48.931 "progress": { 00:17:48.931 "blocks": 21120, 00:17:48.931 "percent": 11 00:17:48.931 } 00:17:48.931 }, 00:17:48.931 "base_bdevs_list": [ 00:17:48.931 { 00:17:48.931 "name": "spare", 00:17:48.931 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:48.931 "is_configured": true, 00:17:48.931 "data_offset": 2048, 00:17:48.931 "data_size": 63488 00:17:48.931 }, 00:17:48.931 { 00:17:48.931 "name": "BaseBdev2", 00:17:48.931 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:48.931 "is_configured": true, 00:17:48.931 "data_offset": 2048, 00:17:48.931 "data_size": 63488 00:17:48.931 }, 00:17:48.931 { 00:17:48.931 "name": "BaseBdev3", 00:17:48.931 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:48.931 "is_configured": true, 00:17:48.931 "data_offset": 2048, 00:17:48.931 "data_size": 63488 00:17:48.931 }, 00:17:48.931 { 00:17:48.931 "name": "BaseBdev4", 00:17:48.931 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:48.931 "is_configured": true, 00:17:48.931 "data_offset": 2048, 00:17:48.931 "data_size": 63488 00:17:48.931 } 00:17:48.931 ] 00:17:48.931 }' 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.931 19:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.868 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.126 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.126 "name": "raid_bdev1", 00:17:50.126 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:50.126 "strip_size_kb": 64, 00:17:50.126 "state": "online", 00:17:50.126 "raid_level": "raid5f", 00:17:50.126 "superblock": true, 00:17:50.126 "num_base_bdevs": 4, 00:17:50.126 "num_base_bdevs_discovered": 4, 00:17:50.126 "num_base_bdevs_operational": 4, 00:17:50.126 "process": { 00:17:50.127 "type": "rebuild", 00:17:50.127 "target": "spare", 00:17:50.127 "progress": { 00:17:50.127 "blocks": 44160, 00:17:50.127 "percent": 23 00:17:50.127 } 00:17:50.127 }, 00:17:50.127 "base_bdevs_list": [ 00:17:50.127 { 00:17:50.127 "name": "spare", 00:17:50.127 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:50.127 "is_configured": true, 00:17:50.127 "data_offset": 2048, 00:17:50.127 "data_size": 63488 00:17:50.127 }, 00:17:50.127 { 00:17:50.127 "name": "BaseBdev2", 00:17:50.127 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:50.127 "is_configured": true, 00:17:50.127 
"data_offset": 2048, 00:17:50.127 "data_size": 63488 00:17:50.127 }, 00:17:50.127 { 00:17:50.127 "name": "BaseBdev3", 00:17:50.127 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:50.127 "is_configured": true, 00:17:50.127 "data_offset": 2048, 00:17:50.127 "data_size": 63488 00:17:50.127 }, 00:17:50.127 { 00:17:50.127 "name": "BaseBdev4", 00:17:50.127 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:50.127 "is_configured": true, 00:17:50.127 "data_offset": 2048, 00:17:50.127 "data_size": 63488 00:17:50.127 } 00:17:50.127 ] 00:17:50.127 }' 00:17:50.127 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.127 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.127 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.127 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.127 19:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.063 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.321 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.321 "name": "raid_bdev1", 00:17:51.321 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:51.321 "strip_size_kb": 64, 00:17:51.321 "state": "online", 00:17:51.321 "raid_level": "raid5f", 00:17:51.321 "superblock": true, 00:17:51.321 "num_base_bdevs": 4, 00:17:51.321 "num_base_bdevs_discovered": 4, 00:17:51.321 "num_base_bdevs_operational": 4, 00:17:51.321 "process": { 00:17:51.321 "type": "rebuild", 00:17:51.321 "target": "spare", 00:17:51.321 "progress": { 00:17:51.321 "blocks": 65280, 00:17:51.321 "percent": 34 00:17:51.321 } 00:17:51.321 }, 00:17:51.321 "base_bdevs_list": [ 00:17:51.321 { 00:17:51.321 "name": "spare", 00:17:51.321 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:51.321 "is_configured": true, 00:17:51.321 "data_offset": 2048, 00:17:51.321 "data_size": 63488 00:17:51.321 }, 00:17:51.321 { 00:17:51.321 "name": "BaseBdev2", 00:17:51.321 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:51.321 "is_configured": true, 00:17:51.321 "data_offset": 2048, 00:17:51.321 "data_size": 63488 00:17:51.321 }, 00:17:51.321 { 00:17:51.321 "name": "BaseBdev3", 00:17:51.321 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:51.321 "is_configured": true, 00:17:51.321 "data_offset": 2048, 00:17:51.321 "data_size": 63488 00:17:51.321 }, 00:17:51.321 { 00:17:51.321 "name": "BaseBdev4", 00:17:51.321 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:51.321 "is_configured": true, 00:17:51.321 "data_offset": 2048, 00:17:51.321 "data_size": 63488 00:17:51.321 } 00:17:51.321 ] 00:17:51.321 }' 
00:17:51.321 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.321 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.321 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.321 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.321 19:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.256 "name": "raid_bdev1", 00:17:52.256 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 
00:17:52.256 "strip_size_kb": 64, 00:17:52.256 "state": "online", 00:17:52.256 "raid_level": "raid5f", 00:17:52.256 "superblock": true, 00:17:52.256 "num_base_bdevs": 4, 00:17:52.256 "num_base_bdevs_discovered": 4, 00:17:52.256 "num_base_bdevs_operational": 4, 00:17:52.256 "process": { 00:17:52.256 "type": "rebuild", 00:17:52.256 "target": "spare", 00:17:52.256 "progress": { 00:17:52.256 "blocks": 88320, 00:17:52.256 "percent": 46 00:17:52.256 } 00:17:52.256 }, 00:17:52.256 "base_bdevs_list": [ 00:17:52.256 { 00:17:52.256 "name": "spare", 00:17:52.256 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:52.256 "is_configured": true, 00:17:52.256 "data_offset": 2048, 00:17:52.256 "data_size": 63488 00:17:52.256 }, 00:17:52.256 { 00:17:52.256 "name": "BaseBdev2", 00:17:52.256 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:52.256 "is_configured": true, 00:17:52.256 "data_offset": 2048, 00:17:52.256 "data_size": 63488 00:17:52.256 }, 00:17:52.256 { 00:17:52.256 "name": "BaseBdev3", 00:17:52.256 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:52.256 "is_configured": true, 00:17:52.256 "data_offset": 2048, 00:17:52.256 "data_size": 63488 00:17:52.256 }, 00:17:52.256 { 00:17:52.256 "name": "BaseBdev4", 00:17:52.256 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:52.256 "is_configured": true, 00:17:52.256 "data_offset": 2048, 00:17:52.256 "data_size": 63488 00:17:52.256 } 00:17:52.256 ] 00:17:52.256 }' 00:17:52.256 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.515 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.515 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.515 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.515 19:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.450 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.450 "name": "raid_bdev1", 00:17:53.450 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:53.451 "strip_size_kb": 64, 00:17:53.451 "state": "online", 00:17:53.451 "raid_level": "raid5f", 00:17:53.451 "superblock": true, 00:17:53.451 "num_base_bdevs": 4, 00:17:53.451 "num_base_bdevs_discovered": 4, 00:17:53.451 "num_base_bdevs_operational": 4, 00:17:53.451 "process": { 00:17:53.451 "type": "rebuild", 00:17:53.451 "target": "spare", 00:17:53.451 "progress": { 00:17:53.451 "blocks": 109440, 00:17:53.451 "percent": 57 00:17:53.451 } 00:17:53.451 }, 00:17:53.451 "base_bdevs_list": [ 00:17:53.451 { 00:17:53.451 "name": "spare", 00:17:53.451 
"uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:53.451 "is_configured": true, 00:17:53.451 "data_offset": 2048, 00:17:53.451 "data_size": 63488 00:17:53.451 }, 00:17:53.451 { 00:17:53.451 "name": "BaseBdev2", 00:17:53.451 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:53.451 "is_configured": true, 00:17:53.451 "data_offset": 2048, 00:17:53.451 "data_size": 63488 00:17:53.451 }, 00:17:53.451 { 00:17:53.451 "name": "BaseBdev3", 00:17:53.451 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:53.451 "is_configured": true, 00:17:53.451 "data_offset": 2048, 00:17:53.451 "data_size": 63488 00:17:53.451 }, 00:17:53.451 { 00:17:53.451 "name": "BaseBdev4", 00:17:53.451 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:53.451 "is_configured": true, 00:17:53.451 "data_offset": 2048, 00:17:53.451 "data_size": 63488 00:17:53.451 } 00:17:53.451 ] 00:17:53.451 }' 00:17:53.451 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.451 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.451 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.709 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.709 19:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.651 "name": "raid_bdev1", 00:17:54.651 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:54.651 "strip_size_kb": 64, 00:17:54.651 "state": "online", 00:17:54.651 "raid_level": "raid5f", 00:17:54.651 "superblock": true, 00:17:54.651 "num_base_bdevs": 4, 00:17:54.651 "num_base_bdevs_discovered": 4, 00:17:54.651 "num_base_bdevs_operational": 4, 00:17:54.651 "process": { 00:17:54.651 "type": "rebuild", 00:17:54.651 "target": "spare", 00:17:54.651 "progress": { 00:17:54.651 "blocks": 132480, 00:17:54.651 "percent": 69 00:17:54.651 } 00:17:54.651 }, 00:17:54.651 "base_bdevs_list": [ 00:17:54.651 { 00:17:54.651 "name": "spare", 00:17:54.651 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:54.651 "is_configured": true, 00:17:54.651 "data_offset": 2048, 00:17:54.651 "data_size": 63488 00:17:54.651 }, 00:17:54.651 { 00:17:54.651 "name": "BaseBdev2", 00:17:54.651 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:54.651 "is_configured": true, 00:17:54.651 "data_offset": 2048, 00:17:54.651 "data_size": 63488 00:17:54.651 }, 00:17:54.651 { 00:17:54.651 "name": "BaseBdev3", 00:17:54.651 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:54.651 "is_configured": true, 00:17:54.651 
"data_offset": 2048, 00:17:54.651 "data_size": 63488 00:17:54.651 }, 00:17:54.651 { 00:17:54.651 "name": "BaseBdev4", 00:17:54.651 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:54.651 "is_configured": true, 00:17:54.651 "data_offset": 2048, 00:17:54.651 "data_size": 63488 00:17:54.651 } 00:17:54.651 ] 00:17:54.651 }' 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.651 19:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.909 19:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.909 19:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.845 "name": "raid_bdev1", 00:17:55.845 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:55.845 "strip_size_kb": 64, 00:17:55.845 "state": "online", 00:17:55.845 "raid_level": "raid5f", 00:17:55.845 "superblock": true, 00:17:55.845 "num_base_bdevs": 4, 00:17:55.845 "num_base_bdevs_discovered": 4, 00:17:55.845 "num_base_bdevs_operational": 4, 00:17:55.845 "process": { 00:17:55.845 "type": "rebuild", 00:17:55.845 "target": "spare", 00:17:55.845 "progress": { 00:17:55.845 "blocks": 153600, 00:17:55.845 "percent": 80 00:17:55.845 } 00:17:55.845 }, 00:17:55.845 "base_bdevs_list": [ 00:17:55.845 { 00:17:55.845 "name": "spare", 00:17:55.845 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:55.845 "is_configured": true, 00:17:55.845 "data_offset": 2048, 00:17:55.845 "data_size": 63488 00:17:55.845 }, 00:17:55.845 { 00:17:55.845 "name": "BaseBdev2", 00:17:55.845 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:55.845 "is_configured": true, 00:17:55.845 "data_offset": 2048, 00:17:55.845 "data_size": 63488 00:17:55.845 }, 00:17:55.845 { 00:17:55.845 "name": "BaseBdev3", 00:17:55.845 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:55.845 "is_configured": true, 00:17:55.845 "data_offset": 2048, 00:17:55.845 "data_size": 63488 00:17:55.845 }, 00:17:55.845 { 00:17:55.845 "name": "BaseBdev4", 00:17:55.845 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:55.845 "is_configured": true, 00:17:55.845 "data_offset": 2048, 00:17:55.845 "data_size": 63488 00:17:55.845 } 00:17:55.845 ] 00:17:55.845 }' 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:55.845 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.846 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.846 19:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.221 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.221 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.221 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.221 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.221 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.222 "name": "raid_bdev1", 00:17:57.222 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:57.222 "strip_size_kb": 64, 00:17:57.222 "state": "online", 00:17:57.222 "raid_level": "raid5f", 00:17:57.222 "superblock": true, 00:17:57.222 "num_base_bdevs": 4, 00:17:57.222 "num_base_bdevs_discovered": 4, 
00:17:57.222 "num_base_bdevs_operational": 4, 00:17:57.222 "process": { 00:17:57.222 "type": "rebuild", 00:17:57.222 "target": "spare", 00:17:57.222 "progress": { 00:17:57.222 "blocks": 176640, 00:17:57.222 "percent": 92 00:17:57.222 } 00:17:57.222 }, 00:17:57.222 "base_bdevs_list": [ 00:17:57.222 { 00:17:57.222 "name": "spare", 00:17:57.222 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:57.222 "is_configured": true, 00:17:57.222 "data_offset": 2048, 00:17:57.222 "data_size": 63488 00:17:57.222 }, 00:17:57.222 { 00:17:57.222 "name": "BaseBdev2", 00:17:57.222 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:57.222 "is_configured": true, 00:17:57.222 "data_offset": 2048, 00:17:57.222 "data_size": 63488 00:17:57.222 }, 00:17:57.222 { 00:17:57.222 "name": "BaseBdev3", 00:17:57.222 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:57.222 "is_configured": true, 00:17:57.222 "data_offset": 2048, 00:17:57.222 "data_size": 63488 00:17:57.222 }, 00:17:57.222 { 00:17:57.222 "name": "BaseBdev4", 00:17:57.222 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:57.222 "is_configured": true, 00:17:57.222 "data_offset": 2048, 00:17:57.222 "data_size": 63488 00:17:57.222 } 00:17:57.222 ] 00:17:57.222 }' 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.222 19:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.790 [2024-11-26 19:04:48.993207] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:57.790 [2024-11-26 19:04:48.993333] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:57.790 [2024-11-26 19:04:48.993548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.047 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.306 "name": "raid_bdev1", 00:17:58.306 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:58.306 "strip_size_kb": 64, 00:17:58.306 "state": "online", 00:17:58.306 "raid_level": "raid5f", 00:17:58.306 "superblock": true, 00:17:58.306 "num_base_bdevs": 4, 00:17:58.306 "num_base_bdevs_discovered": 4, 00:17:58.306 "num_base_bdevs_operational": 4, 00:17:58.306 "base_bdevs_list": [ 00:17:58.306 { 00:17:58.306 "name": "spare", 
00:17:58.306 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 }, 00:17:58.306 { 00:17:58.306 "name": "BaseBdev2", 00:17:58.306 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 }, 00:17:58.306 { 00:17:58.306 "name": "BaseBdev3", 00:17:58.306 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 }, 00:17:58.306 { 00:17:58.306 "name": "BaseBdev4", 00:17:58.306 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 } 00:17:58.306 ] 00:17:58.306 }' 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.306 "name": "raid_bdev1", 00:17:58.306 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:58.306 "strip_size_kb": 64, 00:17:58.306 "state": "online", 00:17:58.306 "raid_level": "raid5f", 00:17:58.306 "superblock": true, 00:17:58.306 "num_base_bdevs": 4, 00:17:58.306 "num_base_bdevs_discovered": 4, 00:17:58.306 "num_base_bdevs_operational": 4, 00:17:58.306 "base_bdevs_list": [ 00:17:58.306 { 00:17:58.306 "name": "spare", 00:17:58.306 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 }, 00:17:58.306 { 00:17:58.306 "name": "BaseBdev2", 00:17:58.306 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 }, 00:17:58.306 { 00:17:58.306 "name": "BaseBdev3", 00:17:58.306 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 }, 00:17:58.306 { 00:17:58.306 "name": "BaseBdev4", 00:17:58.306 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:58.306 "is_configured": true, 00:17:58.306 "data_offset": 2048, 00:17:58.306 "data_size": 63488 00:17:58.306 } 00:17:58.306 ] 
00:17:58.306 }' 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.306 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.565 19:04:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.565 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.565 "name": "raid_bdev1", 00:17:58.566 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:17:58.566 "strip_size_kb": 64, 00:17:58.566 "state": "online", 00:17:58.566 "raid_level": "raid5f", 00:17:58.566 "superblock": true, 00:17:58.566 "num_base_bdevs": 4, 00:17:58.566 "num_base_bdevs_discovered": 4, 00:17:58.566 "num_base_bdevs_operational": 4, 00:17:58.566 "base_bdevs_list": [ 00:17:58.566 { 00:17:58.566 "name": "spare", 00:17:58.566 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:17:58.566 "is_configured": true, 00:17:58.566 "data_offset": 2048, 00:17:58.566 "data_size": 63488 00:17:58.566 }, 00:17:58.566 { 00:17:58.566 "name": "BaseBdev2", 00:17:58.566 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:17:58.566 "is_configured": true, 00:17:58.566 "data_offset": 2048, 00:17:58.566 "data_size": 63488 00:17:58.566 }, 00:17:58.566 { 00:17:58.566 "name": "BaseBdev3", 00:17:58.566 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:17:58.566 "is_configured": true, 00:17:58.566 "data_offset": 2048, 00:17:58.566 "data_size": 63488 00:17:58.566 }, 00:17:58.566 { 00:17:58.566 "name": "BaseBdev4", 00:17:58.566 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:17:58.566 "is_configured": true, 00:17:58.566 "data_offset": 2048, 00:17:58.566 "data_size": 63488 00:17:58.566 } 00:17:58.566 ] 00:17:58.566 }' 00:17:58.566 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.566 19:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.824 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.824 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.824 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.083 [2024-11-26 19:04:50.192325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.083 [2024-11-26 19:04:50.192537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.083 [2024-11-26 19:04:50.192711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.083 [2024-11-26 19:04:50.192845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.083 [2024-11-26 19:04:50.192876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:59.083 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.083 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.084 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:59.375 /dev/nbd0 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.375 19:04:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.375 1+0 records in 00:17:59.375 1+0 records out 00:17:59.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318497 s, 12.9 MB/s 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.375 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:59.634 /dev/nbd1 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.634 1+0 records in 00:17:59.634 1+0 records out 00:17:59.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358131 s, 11.4 MB/s 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.634 19:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:59.895 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.156 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.415 [2024-11-26 19:04:51.681723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:00.415 [2024-11-26 19:04:51.681792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.415 [2024-11-26 19:04:51.681827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:00.415 [2024-11-26 19:04:51.681842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.415 [2024-11-26 19:04:51.685120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:18:00.415 [2024-11-26 19:04:51.685165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:00.415 [2024-11-26 19:04:51.685286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:00.415 [2024-11-26 19:04:51.685372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.415 [2024-11-26 19:04:51.685580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.415 [2024-11-26 19:04:51.685738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:00.415 [2024-11-26 19:04:51.685915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:00.415 spare 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.415 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.674 [2024-11-26 19:04:51.786054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:00.674 [2024-11-26 19:04:51.786315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:00.674 [2024-11-26 19:04:51.786815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:00.674 [2024-11-26 19:04:51.793588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:00.674 [2024-11-26 19:04:51.793730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:00.674 [2024-11-26 19:04:51.794180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.674 19:04:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.674 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.675 "name": "raid_bdev1", 00:18:00.675 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:00.675 "strip_size_kb": 
64, 00:18:00.675 "state": "online", 00:18:00.675 "raid_level": "raid5f", 00:18:00.675 "superblock": true, 00:18:00.675 "num_base_bdevs": 4, 00:18:00.675 "num_base_bdevs_discovered": 4, 00:18:00.675 "num_base_bdevs_operational": 4, 00:18:00.675 "base_bdevs_list": [ 00:18:00.675 { 00:18:00.675 "name": "spare", 00:18:00.675 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:18:00.675 "is_configured": true, 00:18:00.675 "data_offset": 2048, 00:18:00.675 "data_size": 63488 00:18:00.675 }, 00:18:00.675 { 00:18:00.675 "name": "BaseBdev2", 00:18:00.675 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:00.675 "is_configured": true, 00:18:00.675 "data_offset": 2048, 00:18:00.675 "data_size": 63488 00:18:00.675 }, 00:18:00.675 { 00:18:00.675 "name": "BaseBdev3", 00:18:00.675 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:00.675 "is_configured": true, 00:18:00.675 "data_offset": 2048, 00:18:00.675 "data_size": 63488 00:18:00.675 }, 00:18:00.675 { 00:18:00.675 "name": "BaseBdev4", 00:18:00.675 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:00.675 "is_configured": true, 00:18:00.675 "data_offset": 2048, 00:18:00.675 "data_size": 63488 00:18:00.675 } 00:18:00.675 ] 00:18:00.675 }' 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.675 19:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.934 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.934 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.934 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.934 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.934 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.193 19:04:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.193 "name": "raid_bdev1", 00:18:01.193 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:01.193 "strip_size_kb": 64, 00:18:01.193 "state": "online", 00:18:01.193 "raid_level": "raid5f", 00:18:01.193 "superblock": true, 00:18:01.193 "num_base_bdevs": 4, 00:18:01.193 "num_base_bdevs_discovered": 4, 00:18:01.193 "num_base_bdevs_operational": 4, 00:18:01.193 "base_bdevs_list": [ 00:18:01.193 { 00:18:01.193 "name": "spare", 00:18:01.193 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:18:01.193 "is_configured": true, 00:18:01.193 "data_offset": 2048, 00:18:01.193 "data_size": 63488 00:18:01.193 }, 00:18:01.193 { 00:18:01.193 "name": "BaseBdev2", 00:18:01.193 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:01.193 "is_configured": true, 00:18:01.193 "data_offset": 2048, 00:18:01.193 "data_size": 63488 00:18:01.193 }, 00:18:01.193 { 00:18:01.193 "name": "BaseBdev3", 00:18:01.193 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:01.193 "is_configured": true, 00:18:01.193 "data_offset": 2048, 00:18:01.193 "data_size": 63488 00:18:01.193 }, 00:18:01.193 { 00:18:01.193 "name": "BaseBdev4", 00:18:01.193 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:01.193 "is_configured": true, 00:18:01.193 "data_offset": 2048, 00:18:01.193 "data_size": 63488 00:18:01.193 } 00:18:01.193 ] 00:18:01.193 }' 00:18:01.193 19:04:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.193 [2024-11-26 19:04:52.518346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.193 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.453 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.453 "name": "raid_bdev1", 00:18:01.453 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:01.453 "strip_size_kb": 64, 00:18:01.453 "state": "online", 00:18:01.453 "raid_level": "raid5f", 00:18:01.453 "superblock": true, 00:18:01.453 "num_base_bdevs": 4, 00:18:01.453 "num_base_bdevs_discovered": 3, 00:18:01.453 "num_base_bdevs_operational": 3, 00:18:01.453 "base_bdevs_list": [ 00:18:01.453 { 00:18:01.453 "name": null, 00:18:01.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.453 "is_configured": false, 00:18:01.453 
"data_offset": 0, 00:18:01.453 "data_size": 63488 00:18:01.453 }, 00:18:01.453 { 00:18:01.453 "name": "BaseBdev2", 00:18:01.453 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:01.453 "is_configured": true, 00:18:01.453 "data_offset": 2048, 00:18:01.453 "data_size": 63488 00:18:01.453 }, 00:18:01.453 { 00:18:01.453 "name": "BaseBdev3", 00:18:01.453 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:01.453 "is_configured": true, 00:18:01.453 "data_offset": 2048, 00:18:01.453 "data_size": 63488 00:18:01.453 }, 00:18:01.453 { 00:18:01.453 "name": "BaseBdev4", 00:18:01.453 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:01.453 "is_configured": true, 00:18:01.453 "data_offset": 2048, 00:18:01.453 "data_size": 63488 00:18:01.453 } 00:18:01.453 ] 00:18:01.453 }' 00:18:01.453 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.453 19:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.712 19:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.712 19:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.712 19:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.712 [2024-11-26 19:04:53.046467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.712 [2024-11-26 19:04:53.046700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.712 [2024-11-26 19:04:53.046727] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:01.712 [2024-11-26 19:04:53.046818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.712 [2024-11-26 19:04:53.060242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:01.712 19:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.712 19:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:01.712 [2024-11-26 19:04:53.069364] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.090 "name": "raid_bdev1", 00:18:03.090 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:03.090 "strip_size_kb": 64, 00:18:03.090 "state": "online", 00:18:03.090 
"raid_level": "raid5f", 00:18:03.090 "superblock": true, 00:18:03.090 "num_base_bdevs": 4, 00:18:03.090 "num_base_bdevs_discovered": 4, 00:18:03.090 "num_base_bdevs_operational": 4, 00:18:03.090 "process": { 00:18:03.090 "type": "rebuild", 00:18:03.090 "target": "spare", 00:18:03.090 "progress": { 00:18:03.090 "blocks": 17280, 00:18:03.090 "percent": 9 00:18:03.090 } 00:18:03.090 }, 00:18:03.090 "base_bdevs_list": [ 00:18:03.090 { 00:18:03.090 "name": "spare", 00:18:03.090 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 }, 00:18:03.090 { 00:18:03.090 "name": "BaseBdev2", 00:18:03.090 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 }, 00:18:03.090 { 00:18:03.090 "name": "BaseBdev3", 00:18:03.090 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 }, 00:18:03.090 { 00:18:03.090 "name": "BaseBdev4", 00:18:03.090 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 } 00:18:03.090 ] 00:18:03.090 }' 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.090 [2024-11-26 19:04:54.230976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.090 [2024-11-26 19:04:54.282200] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:03.090 [2024-11-26 19:04:54.282348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.090 [2024-11-26 19:04:54.282376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.090 [2024-11-26 19:04:54.282397] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.090 "name": "raid_bdev1", 00:18:03.090 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:03.090 "strip_size_kb": 64, 00:18:03.090 "state": "online", 00:18:03.090 "raid_level": "raid5f", 00:18:03.090 "superblock": true, 00:18:03.090 "num_base_bdevs": 4, 00:18:03.090 "num_base_bdevs_discovered": 3, 00:18:03.090 "num_base_bdevs_operational": 3, 00:18:03.090 "base_bdevs_list": [ 00:18:03.090 { 00:18:03.090 "name": null, 00:18:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.090 "is_configured": false, 00:18:03.090 "data_offset": 0, 00:18:03.090 "data_size": 63488 00:18:03.090 }, 00:18:03.090 { 00:18:03.090 "name": "BaseBdev2", 00:18:03.090 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 }, 00:18:03.090 { 00:18:03.090 "name": "BaseBdev3", 00:18:03.090 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 }, 00:18:03.090 { 00:18:03.090 "name": "BaseBdev4", 00:18:03.090 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:03.090 "is_configured": true, 00:18:03.090 "data_offset": 2048, 00:18:03.090 "data_size": 63488 00:18:03.090 } 00:18:03.090 ] 00:18:03.090 }' 
00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.090 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.658 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.658 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.658 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.658 [2024-11-26 19:04:54.853841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.658 [2024-11-26 19:04:54.853936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.658 [2024-11-26 19:04:54.853975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:03.658 [2024-11-26 19:04:54.853994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.658 [2024-11-26 19:04:54.854639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.658 [2024-11-26 19:04:54.854676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.658 [2024-11-26 19:04:54.854801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:03.658 [2024-11-26 19:04:54.854833] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.658 [2024-11-26 19:04:54.854848] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:03.658 [2024-11-26 19:04:54.854884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.658 [2024-11-26 19:04:54.868499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:03.658 spare 00:18:03.658 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.658 19:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:03.658 [2024-11-26 19:04:54.877476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.593 "name": "raid_bdev1", 00:18:04.593 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:04.593 "strip_size_kb": 64, 00:18:04.593 "state": 
"online", 00:18:04.593 "raid_level": "raid5f", 00:18:04.593 "superblock": true, 00:18:04.593 "num_base_bdevs": 4, 00:18:04.593 "num_base_bdevs_discovered": 4, 00:18:04.593 "num_base_bdevs_operational": 4, 00:18:04.593 "process": { 00:18:04.593 "type": "rebuild", 00:18:04.593 "target": "spare", 00:18:04.593 "progress": { 00:18:04.593 "blocks": 17280, 00:18:04.593 "percent": 9 00:18:04.593 } 00:18:04.593 }, 00:18:04.593 "base_bdevs_list": [ 00:18:04.593 { 00:18:04.593 "name": "spare", 00:18:04.593 "uuid": "5e7aa790-835d-5337-bbd1-2015e5abe826", 00:18:04.593 "is_configured": true, 00:18:04.593 "data_offset": 2048, 00:18:04.593 "data_size": 63488 00:18:04.593 }, 00:18:04.593 { 00:18:04.593 "name": "BaseBdev2", 00:18:04.593 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:04.593 "is_configured": true, 00:18:04.593 "data_offset": 2048, 00:18:04.593 "data_size": 63488 00:18:04.593 }, 00:18:04.593 { 00:18:04.593 "name": "BaseBdev3", 00:18:04.593 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:04.593 "is_configured": true, 00:18:04.593 "data_offset": 2048, 00:18:04.593 "data_size": 63488 00:18:04.593 }, 00:18:04.593 { 00:18:04.593 "name": "BaseBdev4", 00:18:04.593 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:04.593 "is_configured": true, 00:18:04.593 "data_offset": 2048, 00:18:04.593 "data_size": 63488 00:18:04.593 } 00:18:04.593 ] 00:18:04.593 }' 00:18:04.593 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.852 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.852 19:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.852 19:04:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.852 [2024-11-26 19:04:56.043412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.852 [2024-11-26 19:04:56.090371] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.852 [2024-11-26 19:04:56.090766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.852 [2024-11-26 19:04:56.090806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.852 [2024-11-26 19:04:56.090820] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.852 19:04:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.852 "name": "raid_bdev1", 00:18:04.852 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:04.852 "strip_size_kb": 64, 00:18:04.852 "state": "online", 00:18:04.852 "raid_level": "raid5f", 00:18:04.852 "superblock": true, 00:18:04.852 "num_base_bdevs": 4, 00:18:04.852 "num_base_bdevs_discovered": 3, 00:18:04.852 "num_base_bdevs_operational": 3, 00:18:04.852 "base_bdevs_list": [ 00:18:04.852 { 00:18:04.852 "name": null, 00:18:04.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.852 "is_configured": false, 00:18:04.852 "data_offset": 0, 00:18:04.852 "data_size": 63488 00:18:04.852 }, 00:18:04.852 { 00:18:04.852 "name": "BaseBdev2", 00:18:04.852 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:04.852 "is_configured": true, 00:18:04.852 "data_offset": 2048, 00:18:04.852 "data_size": 63488 00:18:04.852 }, 00:18:04.852 { 00:18:04.852 "name": "BaseBdev3", 00:18:04.852 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:04.852 "is_configured": true, 00:18:04.852 "data_offset": 2048, 00:18:04.852 "data_size": 63488 00:18:04.852 }, 00:18:04.852 { 00:18:04.852 "name": "BaseBdev4", 00:18:04.852 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:04.852 "is_configured": true, 00:18:04.852 "data_offset": 2048, 00:18:04.852 
"data_size": 63488 00:18:04.852 } 00:18:04.852 ] 00:18:04.852 }' 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.852 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.419 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.419 "name": "raid_bdev1", 00:18:05.419 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:05.419 "strip_size_kb": 64, 00:18:05.419 "state": "online", 00:18:05.419 "raid_level": "raid5f", 00:18:05.419 "superblock": true, 00:18:05.419 "num_base_bdevs": 4, 00:18:05.419 "num_base_bdevs_discovered": 3, 00:18:05.419 "num_base_bdevs_operational": 3, 00:18:05.419 "base_bdevs_list": [ 00:18:05.419 { 00:18:05.419 "name": null, 00:18:05.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.419 
"is_configured": false, 00:18:05.419 "data_offset": 0, 00:18:05.420 "data_size": 63488 00:18:05.420 }, 00:18:05.420 { 00:18:05.420 "name": "BaseBdev2", 00:18:05.420 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:05.420 "is_configured": true, 00:18:05.420 "data_offset": 2048, 00:18:05.420 "data_size": 63488 00:18:05.420 }, 00:18:05.420 { 00:18:05.420 "name": "BaseBdev3", 00:18:05.420 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:05.420 "is_configured": true, 00:18:05.420 "data_offset": 2048, 00:18:05.420 "data_size": 63488 00:18:05.420 }, 00:18:05.420 { 00:18:05.420 "name": "BaseBdev4", 00:18:05.420 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:05.420 "is_configured": true, 00:18:05.420 "data_offset": 2048, 00:18:05.420 "data_size": 63488 00:18:05.420 } 00:18:05.420 ] 00:18:05.420 }' 00:18:05.420 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.420 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.420 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.679 19:04:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.679 [2024-11-26 19:04:56.798586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:05.679 [2024-11-26 19:04:56.798780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.679 [2024-11-26 19:04:56.798864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:05.679 [2024-11-26 19:04:56.799086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.679 [2024-11-26 19:04:56.799723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.679 [2024-11-26 19:04:56.799761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.679 [2024-11-26 19:04:56.799887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:05.679 [2024-11-26 19:04:56.799926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.679 [2024-11-26 19:04:56.799944] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:05.679 [2024-11-26 19:04:56.799958] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:05.679 BaseBdev1 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.679 19:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:06.619 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:06.619 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.619 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:06.619 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.619 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.620 "name": "raid_bdev1", 00:18:06.620 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:06.620 "strip_size_kb": 64, 00:18:06.620 "state": "online", 00:18:06.620 "raid_level": "raid5f", 00:18:06.620 "superblock": true, 00:18:06.620 "num_base_bdevs": 4, 00:18:06.620 "num_base_bdevs_discovered": 3, 00:18:06.620 "num_base_bdevs_operational": 3, 00:18:06.620 "base_bdevs_list": [ 00:18:06.620 { 00:18:06.620 "name": null, 00:18:06.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.620 "is_configured": false, 00:18:06.620 
"data_offset": 0, 00:18:06.620 "data_size": 63488 00:18:06.620 }, 00:18:06.620 { 00:18:06.620 "name": "BaseBdev2", 00:18:06.620 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:06.620 "is_configured": true, 00:18:06.620 "data_offset": 2048, 00:18:06.620 "data_size": 63488 00:18:06.620 }, 00:18:06.620 { 00:18:06.620 "name": "BaseBdev3", 00:18:06.620 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:06.620 "is_configured": true, 00:18:06.620 "data_offset": 2048, 00:18:06.620 "data_size": 63488 00:18:06.620 }, 00:18:06.620 { 00:18:06.620 "name": "BaseBdev4", 00:18:06.620 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:06.620 "is_configured": true, 00:18:06.620 "data_offset": 2048, 00:18:06.620 "data_size": 63488 00:18:06.620 } 00:18:06.620 ] 00:18:06.620 }' 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.620 19:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.189 "name": "raid_bdev1", 00:18:07.189 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:07.189 "strip_size_kb": 64, 00:18:07.189 "state": "online", 00:18:07.189 "raid_level": "raid5f", 00:18:07.189 "superblock": true, 00:18:07.189 "num_base_bdevs": 4, 00:18:07.189 "num_base_bdevs_discovered": 3, 00:18:07.189 "num_base_bdevs_operational": 3, 00:18:07.189 "base_bdevs_list": [ 00:18:07.189 { 00:18:07.189 "name": null, 00:18:07.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.189 "is_configured": false, 00:18:07.189 "data_offset": 0, 00:18:07.189 "data_size": 63488 00:18:07.189 }, 00:18:07.189 { 00:18:07.189 "name": "BaseBdev2", 00:18:07.189 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:07.189 "is_configured": true, 00:18:07.189 "data_offset": 2048, 00:18:07.189 "data_size": 63488 00:18:07.189 }, 00:18:07.189 { 00:18:07.189 "name": "BaseBdev3", 00:18:07.189 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:07.189 "is_configured": true, 00:18:07.189 "data_offset": 2048, 00:18:07.189 "data_size": 63488 00:18:07.189 }, 00:18:07.189 { 00:18:07.189 "name": "BaseBdev4", 00:18:07.189 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:07.189 "is_configured": true, 00:18:07.189 "data_offset": 2048, 00:18:07.189 "data_size": 63488 00:18:07.189 } 00:18:07.189 ] 00:18:07.189 }' 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.189 
19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.189 [2024-11-26 19:04:58.491266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.189 [2024-11-26 19:04:58.491523] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.189 [2024-11-26 19:04:58.491551] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:07.189 request: 00:18:07.189 { 00:18:07.189 "base_bdev": "BaseBdev1", 00:18:07.189 "raid_bdev": "raid_bdev1", 00:18:07.189 "method": "bdev_raid_add_base_bdev", 00:18:07.189 "req_id": 1 00:18:07.189 } 00:18:07.189 Got JSON-RPC error response 00:18:07.189 response: 00:18:07.189 { 00:18:07.189 "code": -22, 00:18:07.189 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:07.189 } 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.189 19:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.565 "name": "raid_bdev1", 00:18:08.565 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:08.565 "strip_size_kb": 64, 00:18:08.565 "state": "online", 00:18:08.565 "raid_level": "raid5f", 00:18:08.565 "superblock": true, 00:18:08.565 "num_base_bdevs": 4, 00:18:08.565 "num_base_bdevs_discovered": 3, 00:18:08.565 "num_base_bdevs_operational": 3, 00:18:08.565 "base_bdevs_list": [ 00:18:08.565 { 00:18:08.565 "name": null, 00:18:08.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.565 "is_configured": false, 00:18:08.565 "data_offset": 0, 00:18:08.565 "data_size": 63488 00:18:08.565 }, 00:18:08.565 { 00:18:08.565 "name": "BaseBdev2", 00:18:08.565 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:08.565 "is_configured": true, 00:18:08.565 "data_offset": 2048, 00:18:08.565 "data_size": 63488 00:18:08.565 }, 00:18:08.565 { 00:18:08.565 "name": "BaseBdev3", 00:18:08.565 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:08.565 "is_configured": true, 00:18:08.565 "data_offset": 2048, 00:18:08.565 "data_size": 63488 00:18:08.565 }, 00:18:08.565 { 00:18:08.565 "name": "BaseBdev4", 00:18:08.565 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:08.565 "is_configured": true, 00:18:08.565 "data_offset": 2048, 00:18:08.565 "data_size": 63488 00:18:08.565 } 00:18:08.565 ] 00:18:08.565 }' 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.565 19:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.823 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.823 "name": "raid_bdev1", 00:18:08.824 "uuid": "78491710-ab5f-40b2-b3c4-3dc327195935", 00:18:08.824 "strip_size_kb": 64, 00:18:08.824 "state": "online", 00:18:08.824 "raid_level": "raid5f", 00:18:08.824 "superblock": true, 00:18:08.824 "num_base_bdevs": 4, 00:18:08.824 "num_base_bdevs_discovered": 3, 00:18:08.824 "num_base_bdevs_operational": 3, 00:18:08.824 "base_bdevs_list": [ 00:18:08.824 { 00:18:08.824 "name": null, 00:18:08.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.824 "is_configured": false, 00:18:08.824 "data_offset": 0, 00:18:08.824 "data_size": 63488 00:18:08.824 }, 00:18:08.824 { 00:18:08.824 "name": "BaseBdev2", 00:18:08.824 "uuid": "4a549616-9a1b-5518-8743-4b33199454d3", 00:18:08.824 "is_configured": true, 00:18:08.824 
"data_offset": 2048, 00:18:08.824 "data_size": 63488 00:18:08.824 }, 00:18:08.824 { 00:18:08.824 "name": "BaseBdev3", 00:18:08.824 "uuid": "ae78237d-7e2a-5a2c-bc48-2922de8ca762", 00:18:08.824 "is_configured": true, 00:18:08.824 "data_offset": 2048, 00:18:08.824 "data_size": 63488 00:18:08.824 }, 00:18:08.824 { 00:18:08.824 "name": "BaseBdev4", 00:18:08.824 "uuid": "a3ff20ab-5a00-587b-a3fa-847c70a7a4c6", 00:18:08.824 "is_configured": true, 00:18:08.824 "data_offset": 2048, 00:18:08.824 "data_size": 63488 00:18:08.824 } 00:18:08.824 ] 00:18:08.824 }' 00:18:08.824 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.824 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.824 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85578 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85578 ']' 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85578 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85578 00:18:09.082 killing process with pid 85578 00:18:09.082 Received shutdown signal, test time was about 60.000000 seconds 00:18:09.082 00:18:09.082 Latency(us) 00:18:09.082 [2024-11-26T19:05:00.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.082 [2024-11-26T19:05:00.449Z] 
=================================================================================================================== 00:18:09.082 [2024-11-26T19:05:00.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85578' 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85578 00:18:09.082 [2024-11-26 19:05:00.225423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.082 19:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85578 00:18:09.082 [2024-11-26 19:05:00.225589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.082 [2024-11-26 19:05:00.225687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.082 [2024-11-26 19:05:00.225706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:09.340 [2024-11-26 19:05:00.666231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.714 ************************************ 00:18:10.714 END TEST raid5f_rebuild_test_sb 00:18:10.714 ************************************ 00:18:10.714 19:05:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:10.714 00:18:10.714 real 0m28.744s 00:18:10.714 user 0m37.491s 00:18:10.714 sys 0m2.893s 00:18:10.714 19:05:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.714 19:05:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.714 19:05:01 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:10.714 19:05:01 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:10.714 19:05:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:10.714 19:05:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.714 19:05:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:10.714 ************************************ 00:18:10.714 START TEST raid_state_function_test_sb_4k 00:18:10.714 ************************************ 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:10.714 19:05:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86402 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:10.714 Process raid pid: 86402 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86402' 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86402 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86402 ']' 00:18:10.714 19:05:01 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.714 19:05:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.714 [2024-11-26 19:05:01.900068] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:18:10.714 [2024-11-26 19:05:01.900235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.975 [2024-11-26 19:05:02.091227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.975 [2024-11-26 19:05:02.250607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.237 [2024-11-26 19:05:02.460029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.237 [2024-11-26 19:05:02.460083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.804 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.804 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:11.804 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:11.804 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.804 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.804 [2024-11-26 19:05:02.880372] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.804 [2024-11-26 19:05:02.880447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.804 [2024-11-26 19:05:02.880464] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.804 [2024-11-26 19:05:02.880480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.804 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.805 
19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.805 "name": "Existed_Raid", 00:18:11.805 "uuid": "b423b5ad-c855-4846-9f11-14e3a78ebc6c", 00:18:11.805 "strip_size_kb": 0, 00:18:11.805 "state": "configuring", 00:18:11.805 "raid_level": "raid1", 00:18:11.805 "superblock": true, 00:18:11.805 "num_base_bdevs": 2, 00:18:11.805 "num_base_bdevs_discovered": 0, 00:18:11.805 "num_base_bdevs_operational": 2, 00:18:11.805 "base_bdevs_list": [ 00:18:11.805 { 00:18:11.805 "name": "BaseBdev1", 00:18:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.805 "is_configured": false, 00:18:11.805 "data_offset": 0, 00:18:11.805 "data_size": 0 00:18:11.805 }, 00:18:11.805 { 00:18:11.805 "name": "BaseBdev2", 00:18:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.805 "is_configured": false, 00:18:11.805 "data_offset": 0, 00:18:11.805 "data_size": 0 00:18:11.805 } 00:18:11.805 ] 00:18:11.805 }' 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.805 19:05:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.064 [2024-11-26 19:05:03.388498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.064 [2024-11-26 19:05:03.388558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.064 [2024-11-26 19:05:03.396527] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.064 [2024-11-26 19:05:03.396603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.064 [2024-11-26 19:05:03.396626] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.064 [2024-11-26 19:05:03.396654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:12.064 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.064 19:05:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.324 [2024-11-26 19:05:03.443055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.324 BaseBdev1 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.324 [ 00:18:12.324 { 00:18:12.324 "name": "BaseBdev1", 00:18:12.324 "aliases": [ 00:18:12.324 
"3f26f9ea-6062-4eb6-b90c-2de41088db7b" 00:18:12.324 ], 00:18:12.324 "product_name": "Malloc disk", 00:18:12.324 "block_size": 4096, 00:18:12.324 "num_blocks": 8192, 00:18:12.324 "uuid": "3f26f9ea-6062-4eb6-b90c-2de41088db7b", 00:18:12.324 "assigned_rate_limits": { 00:18:12.324 "rw_ios_per_sec": 0, 00:18:12.324 "rw_mbytes_per_sec": 0, 00:18:12.324 "r_mbytes_per_sec": 0, 00:18:12.324 "w_mbytes_per_sec": 0 00:18:12.324 }, 00:18:12.324 "claimed": true, 00:18:12.324 "claim_type": "exclusive_write", 00:18:12.324 "zoned": false, 00:18:12.324 "supported_io_types": { 00:18:12.324 "read": true, 00:18:12.324 "write": true, 00:18:12.324 "unmap": true, 00:18:12.324 "flush": true, 00:18:12.324 "reset": true, 00:18:12.324 "nvme_admin": false, 00:18:12.324 "nvme_io": false, 00:18:12.324 "nvme_io_md": false, 00:18:12.324 "write_zeroes": true, 00:18:12.324 "zcopy": true, 00:18:12.324 "get_zone_info": false, 00:18:12.324 "zone_management": false, 00:18:12.324 "zone_append": false, 00:18:12.324 "compare": false, 00:18:12.324 "compare_and_write": false, 00:18:12.324 "abort": true, 00:18:12.324 "seek_hole": false, 00:18:12.324 "seek_data": false, 00:18:12.324 "copy": true, 00:18:12.324 "nvme_iov_md": false 00:18:12.324 }, 00:18:12.324 "memory_domains": [ 00:18:12.324 { 00:18:12.324 "dma_device_id": "system", 00:18:12.324 "dma_device_type": 1 00:18:12.324 }, 00:18:12.324 { 00:18:12.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.324 "dma_device_type": 2 00:18:12.324 } 00:18:12.324 ], 00:18:12.324 "driver_specific": {} 00:18:12.324 } 00:18:12.324 ] 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.324 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.325 "name": "Existed_Raid", 00:18:12.325 "uuid": "41b1aa41-7b2e-4a62-814b-13fceda10200", 00:18:12.325 "strip_size_kb": 0, 00:18:12.325 "state": "configuring", 00:18:12.325 "raid_level": "raid1", 00:18:12.325 "superblock": true, 00:18:12.325 "num_base_bdevs": 2, 00:18:12.325 
"num_base_bdevs_discovered": 1, 00:18:12.325 "num_base_bdevs_operational": 2, 00:18:12.325 "base_bdevs_list": [ 00:18:12.325 { 00:18:12.325 "name": "BaseBdev1", 00:18:12.325 "uuid": "3f26f9ea-6062-4eb6-b90c-2de41088db7b", 00:18:12.325 "is_configured": true, 00:18:12.325 "data_offset": 256, 00:18:12.325 "data_size": 7936 00:18:12.325 }, 00:18:12.325 { 00:18:12.325 "name": "BaseBdev2", 00:18:12.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.325 "is_configured": false, 00:18:12.325 "data_offset": 0, 00:18:12.325 "data_size": 0 00:18:12.325 } 00:18:12.325 ] 00:18:12.325 }' 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.325 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.894 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:12.894 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.894 19:05:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.894 [2024-11-26 19:05:03.999280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.894 [2024-11-26 19:05:03.999370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.894 [2024-11-26 19:05:04.011344] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.894 [2024-11-26 19:05:04.014125] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.894 [2024-11-26 19:05:04.014178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.894 "name": "Existed_Raid", 00:18:12.894 "uuid": "280d4e1b-d766-420b-8197-918fcf0555e3", 00:18:12.894 "strip_size_kb": 0, 00:18:12.894 "state": "configuring", 00:18:12.894 "raid_level": "raid1", 00:18:12.894 "superblock": true, 00:18:12.894 "num_base_bdevs": 2, 00:18:12.894 "num_base_bdevs_discovered": 1, 00:18:12.894 "num_base_bdevs_operational": 2, 00:18:12.894 "base_bdevs_list": [ 00:18:12.894 { 00:18:12.894 "name": "BaseBdev1", 00:18:12.894 "uuid": "3f26f9ea-6062-4eb6-b90c-2de41088db7b", 00:18:12.894 "is_configured": true, 00:18:12.894 "data_offset": 256, 00:18:12.894 "data_size": 7936 00:18:12.894 }, 00:18:12.894 { 00:18:12.894 "name": "BaseBdev2", 00:18:12.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.894 "is_configured": false, 00:18:12.894 "data_offset": 0, 00:18:12.894 "data_size": 0 00:18:12.894 } 00:18:12.894 ] 00:18:12.894 }' 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.894 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.464 19:05:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 [2024-11-26 19:05:04.569644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.464 [2024-11-26 19:05:04.569982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:13.464 [2024-11-26 19:05:04.570002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.464 BaseBdev2 00:18:13.464 [2024-11-26 19:05:04.570333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:13.464 [2024-11-26 19:05:04.570543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:13.464 [2024-11-26 19:05:04.570565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:13.464 [2024-11-26 19:05:04.570735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:13.464 19:05:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 [ 00:18:13.464 { 00:18:13.464 "name": "BaseBdev2", 00:18:13.464 "aliases": [ 00:18:13.464 "122e7101-2258-464b-bc3d-ba64c423712c" 00:18:13.464 ], 00:18:13.464 "product_name": "Malloc disk", 00:18:13.464 "block_size": 4096, 00:18:13.464 "num_blocks": 8192, 00:18:13.464 "uuid": "122e7101-2258-464b-bc3d-ba64c423712c", 00:18:13.464 "assigned_rate_limits": { 00:18:13.464 "rw_ios_per_sec": 0, 00:18:13.464 "rw_mbytes_per_sec": 0, 00:18:13.464 "r_mbytes_per_sec": 0, 00:18:13.464 "w_mbytes_per_sec": 0 00:18:13.464 }, 00:18:13.464 "claimed": true, 00:18:13.464 "claim_type": "exclusive_write", 00:18:13.464 "zoned": false, 00:18:13.464 "supported_io_types": { 00:18:13.464 "read": true, 00:18:13.464 "write": true, 00:18:13.464 "unmap": true, 00:18:13.464 "flush": true, 00:18:13.464 "reset": true, 00:18:13.464 "nvme_admin": false, 00:18:13.464 "nvme_io": false, 00:18:13.464 "nvme_io_md": false, 00:18:13.464 "write_zeroes": true, 00:18:13.464 "zcopy": true, 00:18:13.464 "get_zone_info": false, 00:18:13.464 "zone_management": false, 00:18:13.464 "zone_append": false, 00:18:13.464 "compare": false, 00:18:13.464 "compare_and_write": false, 00:18:13.464 "abort": true, 00:18:13.464 "seek_hole": false, 00:18:13.464 "seek_data": false, 00:18:13.464 "copy": true, 00:18:13.464 "nvme_iov_md": false 
00:18:13.464 }, 00:18:13.464 "memory_domains": [ 00:18:13.464 { 00:18:13.464 "dma_device_id": "system", 00:18:13.464 "dma_device_type": 1 00:18:13.464 }, 00:18:13.464 { 00:18:13.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.464 "dma_device_type": 2 00:18:13.464 } 00:18:13.464 ], 00:18:13.464 "driver_specific": {} 00:18:13.464 } 00:18:13.464 ] 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.464 "name": "Existed_Raid", 00:18:13.464 "uuid": "280d4e1b-d766-420b-8197-918fcf0555e3", 00:18:13.464 "strip_size_kb": 0, 00:18:13.464 "state": "online", 00:18:13.464 "raid_level": "raid1", 00:18:13.464 "superblock": true, 00:18:13.464 "num_base_bdevs": 2, 00:18:13.464 "num_base_bdevs_discovered": 2, 00:18:13.464 "num_base_bdevs_operational": 2, 00:18:13.464 "base_bdevs_list": [ 00:18:13.464 { 00:18:13.464 "name": "BaseBdev1", 00:18:13.464 "uuid": "3f26f9ea-6062-4eb6-b90c-2de41088db7b", 00:18:13.464 "is_configured": true, 00:18:13.464 "data_offset": 256, 00:18:13.464 "data_size": 7936 00:18:13.464 }, 00:18:13.464 { 00:18:13.464 "name": "BaseBdev2", 00:18:13.464 "uuid": "122e7101-2258-464b-bc3d-ba64c423712c", 00:18:13.464 "is_configured": true, 00:18:13.464 "data_offset": 256, 00:18:13.464 "data_size": 7936 00:18:13.464 } 00:18:13.464 ] 00:18:13.464 }' 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.464 19:05:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.032 19:05:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.032 [2024-11-26 19:05:05.154253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.032 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.032 "name": "Existed_Raid", 00:18:14.032 "aliases": [ 00:18:14.032 "280d4e1b-d766-420b-8197-918fcf0555e3" 00:18:14.032 ], 00:18:14.032 "product_name": "Raid Volume", 00:18:14.032 "block_size": 4096, 00:18:14.032 "num_blocks": 7936, 00:18:14.032 "uuid": "280d4e1b-d766-420b-8197-918fcf0555e3", 00:18:14.032 "assigned_rate_limits": { 00:18:14.032 "rw_ios_per_sec": 0, 00:18:14.032 "rw_mbytes_per_sec": 0, 00:18:14.032 "r_mbytes_per_sec": 0, 00:18:14.032 "w_mbytes_per_sec": 0 00:18:14.032 }, 00:18:14.032 "claimed": false, 00:18:14.032 "zoned": false, 00:18:14.032 "supported_io_types": { 00:18:14.032 "read": true, 
00:18:14.032 "write": true, 00:18:14.032 "unmap": false, 00:18:14.032 "flush": false, 00:18:14.032 "reset": true, 00:18:14.032 "nvme_admin": false, 00:18:14.032 "nvme_io": false, 00:18:14.032 "nvme_io_md": false, 00:18:14.032 "write_zeroes": true, 00:18:14.032 "zcopy": false, 00:18:14.032 "get_zone_info": false, 00:18:14.032 "zone_management": false, 00:18:14.032 "zone_append": false, 00:18:14.032 "compare": false, 00:18:14.032 "compare_and_write": false, 00:18:14.033 "abort": false, 00:18:14.033 "seek_hole": false, 00:18:14.033 "seek_data": false, 00:18:14.033 "copy": false, 00:18:14.033 "nvme_iov_md": false 00:18:14.033 }, 00:18:14.033 "memory_domains": [ 00:18:14.033 { 00:18:14.033 "dma_device_id": "system", 00:18:14.033 "dma_device_type": 1 00:18:14.033 }, 00:18:14.033 { 00:18:14.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.033 "dma_device_type": 2 00:18:14.033 }, 00:18:14.033 { 00:18:14.033 "dma_device_id": "system", 00:18:14.033 "dma_device_type": 1 00:18:14.033 }, 00:18:14.033 { 00:18:14.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.033 "dma_device_type": 2 00:18:14.033 } 00:18:14.033 ], 00:18:14.033 "driver_specific": { 00:18:14.033 "raid": { 00:18:14.033 "uuid": "280d4e1b-d766-420b-8197-918fcf0555e3", 00:18:14.033 "strip_size_kb": 0, 00:18:14.033 "state": "online", 00:18:14.033 "raid_level": "raid1", 00:18:14.033 "superblock": true, 00:18:14.033 "num_base_bdevs": 2, 00:18:14.033 "num_base_bdevs_discovered": 2, 00:18:14.033 "num_base_bdevs_operational": 2, 00:18:14.033 "base_bdevs_list": [ 00:18:14.033 { 00:18:14.033 "name": "BaseBdev1", 00:18:14.033 "uuid": "3f26f9ea-6062-4eb6-b90c-2de41088db7b", 00:18:14.033 "is_configured": true, 00:18:14.033 "data_offset": 256, 00:18:14.033 "data_size": 7936 00:18:14.033 }, 00:18:14.033 { 00:18:14.033 "name": "BaseBdev2", 00:18:14.033 "uuid": "122e7101-2258-464b-bc3d-ba64c423712c", 00:18:14.033 "is_configured": true, 00:18:14.033 "data_offset": 256, 00:18:14.033 "data_size": 7936 00:18:14.033 } 
00:18:14.033 ] 00:18:14.033 } 00:18:14.033 } 00:18:14.033 }' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:14.033 BaseBdev2' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.033 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.292 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:14.292 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:14.292 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:14.292 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.292 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.293 [2024-11-26 19:05:05.422110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:14.293 19:05:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.293 "name": "Existed_Raid", 00:18:14.293 "uuid": "280d4e1b-d766-420b-8197-918fcf0555e3", 00:18:14.293 "strip_size_kb": 0, 00:18:14.293 "state": "online", 00:18:14.293 "raid_level": "raid1", 00:18:14.293 "superblock": true, 00:18:14.293 
"num_base_bdevs": 2, 00:18:14.293 "num_base_bdevs_discovered": 1, 00:18:14.293 "num_base_bdevs_operational": 1, 00:18:14.293 "base_bdevs_list": [ 00:18:14.293 { 00:18:14.293 "name": null, 00:18:14.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.293 "is_configured": false, 00:18:14.293 "data_offset": 0, 00:18:14.293 "data_size": 7936 00:18:14.293 }, 00:18:14.293 { 00:18:14.293 "name": "BaseBdev2", 00:18:14.293 "uuid": "122e7101-2258-464b-bc3d-ba64c423712c", 00:18:14.293 "is_configured": true, 00:18:14.293 "data_offset": 256, 00:18:14.293 "data_size": 7936 00:18:14.293 } 00:18:14.293 ] 00:18:14.293 }' 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.293 19:05:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.861 [2024-11-26 19:05:06.099345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:14.861 [2024-11-26 19:05:06.099478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.861 [2024-11-26 19:05:06.187371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.861 [2024-11-26 19:05:06.187445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.861 [2024-11-26 19:05:06.187467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.861 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:15.121 19:05:06 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86402 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86402 ']' 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86402 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86402 00:18:15.121 killing process with pid 86402 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86402' 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86402 00:18:15.121 [2024-11-26 19:05:06.288848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.121 19:05:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86402 00:18:15.121 [2024-11-26 19:05:06.303883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.097 19:05:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:16.097 00:18:16.097 real 0m5.580s 00:18:16.097 user 0m8.376s 00:18:16.097 sys 0m0.860s 00:18:16.097 19:05:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.097 19:05:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.097 ************************************ 00:18:16.097 END TEST raid_state_function_test_sb_4k 00:18:16.097 ************************************ 00:18:16.097 19:05:07 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:16.097 19:05:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:16.097 19:05:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.097 19:05:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.097 ************************************ 00:18:16.097 START TEST raid_superblock_test_4k 00:18:16.097 ************************************ 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:16.097 
19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86656 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86656 00:18:16.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86656 ']' 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.097 19:05:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.356 [2024-11-26 19:05:07.539474] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:18:16.356 [2024-11-26 19:05:07.539693] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86656 ] 00:18:16.614 [2024-11-26 19:05:07.725693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.614 [2024-11-26 19:05:07.854280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.890 [2024-11-26 19:05:08.056200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.890 [2024-11-26 19:05:08.056276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.149 malloc1 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.149 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 [2024-11-26 19:05:08.516883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.409 [2024-11-26 19:05:08.517113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.409 [2024-11-26 19:05:08.517272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:17.409 [2024-11-26 19:05:08.517430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.409 [2024-11-26 19:05:08.520487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.409 [2024-11-26 19:05:08.520649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.409 pt1 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 malloc2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 [2024-11-26 19:05:08.572880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:17.409 [2024-11-26 19:05:08.572960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.409 [2024-11-26 19:05:08.573000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:17.409 [2024-11-26 19:05:08.573016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.409 [2024-11-26 19:05:08.575812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.409 [2024-11-26 
19:05:08.575870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:17.409 pt2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 [2024-11-26 19:05:08.580936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.409 [2024-11-26 19:05:08.583461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.409 [2024-11-26 19:05:08.583809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:17.409 [2024-11-26 19:05:08.583966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:17.409 [2024-11-26 19:05:08.584334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:17.409 [2024-11-26 19:05:08.584659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:17.409 [2024-11-26 19:05:08.584798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:17.409 [2024-11-26 19:05:08.585150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.409 "name": "raid_bdev1", 00:18:17.409 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:17.409 "strip_size_kb": 0, 00:18:17.409 "state": "online", 00:18:17.409 "raid_level": "raid1", 00:18:17.409 "superblock": true, 00:18:17.409 "num_base_bdevs": 2, 00:18:17.409 
"num_base_bdevs_discovered": 2, 00:18:17.409 "num_base_bdevs_operational": 2, 00:18:17.409 "base_bdevs_list": [ 00:18:17.409 { 00:18:17.409 "name": "pt1", 00:18:17.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.409 "is_configured": true, 00:18:17.409 "data_offset": 256, 00:18:17.409 "data_size": 7936 00:18:17.409 }, 00:18:17.409 { 00:18:17.409 "name": "pt2", 00:18:17.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.409 "is_configured": true, 00:18:17.409 "data_offset": 256, 00:18:17.409 "data_size": 7936 00:18:17.409 } 00:18:17.409 ] 00:18:17.409 }' 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.409 19:05:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.975 [2024-11-26 19:05:09.077606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.975 "name": "raid_bdev1", 00:18:17.975 "aliases": [ 00:18:17.975 "dd6554c6-125e-4bc8-859d-20c38411e6e4" 00:18:17.975 ], 00:18:17.975 "product_name": "Raid Volume", 00:18:17.975 "block_size": 4096, 00:18:17.975 "num_blocks": 7936, 00:18:17.975 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:17.975 "assigned_rate_limits": { 00:18:17.975 "rw_ios_per_sec": 0, 00:18:17.975 "rw_mbytes_per_sec": 0, 00:18:17.975 "r_mbytes_per_sec": 0, 00:18:17.975 "w_mbytes_per_sec": 0 00:18:17.975 }, 00:18:17.975 "claimed": false, 00:18:17.975 "zoned": false, 00:18:17.975 "supported_io_types": { 00:18:17.975 "read": true, 00:18:17.975 "write": true, 00:18:17.975 "unmap": false, 00:18:17.975 "flush": false, 00:18:17.975 "reset": true, 00:18:17.975 "nvme_admin": false, 00:18:17.975 "nvme_io": false, 00:18:17.975 "nvme_io_md": false, 00:18:17.975 "write_zeroes": true, 00:18:17.975 "zcopy": false, 00:18:17.975 "get_zone_info": false, 00:18:17.975 "zone_management": false, 00:18:17.975 "zone_append": false, 00:18:17.975 "compare": false, 00:18:17.975 "compare_and_write": false, 00:18:17.975 "abort": false, 00:18:17.975 "seek_hole": false, 00:18:17.975 "seek_data": false, 00:18:17.975 "copy": false, 00:18:17.975 "nvme_iov_md": false 00:18:17.975 }, 00:18:17.975 "memory_domains": [ 00:18:17.975 { 00:18:17.975 "dma_device_id": "system", 00:18:17.975 "dma_device_type": 1 00:18:17.975 }, 00:18:17.975 { 00:18:17.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.975 "dma_device_type": 2 00:18:17.975 }, 00:18:17.975 { 00:18:17.975 "dma_device_id": "system", 00:18:17.975 "dma_device_type": 1 00:18:17.975 }, 00:18:17.975 { 00:18:17.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.975 "dma_device_type": 2 00:18:17.975 } 00:18:17.975 ], 
00:18:17.975 "driver_specific": { 00:18:17.975 "raid": { 00:18:17.975 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:17.975 "strip_size_kb": 0, 00:18:17.975 "state": "online", 00:18:17.975 "raid_level": "raid1", 00:18:17.975 "superblock": true, 00:18:17.975 "num_base_bdevs": 2, 00:18:17.975 "num_base_bdevs_discovered": 2, 00:18:17.975 "num_base_bdevs_operational": 2, 00:18:17.975 "base_bdevs_list": [ 00:18:17.975 { 00:18:17.975 "name": "pt1", 00:18:17.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.975 "is_configured": true, 00:18:17.975 "data_offset": 256, 00:18:17.975 "data_size": 7936 00:18:17.975 }, 00:18:17.975 { 00:18:17.975 "name": "pt2", 00:18:17.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.975 "is_configured": true, 00:18:17.975 "data_offset": 256, 00:18:17.975 "data_size": 7936 00:18:17.975 } 00:18:17.975 ] 00:18:17.975 } 00:18:17.975 } 00:18:17.975 }' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:17.975 pt2' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.975 19:05:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:18.233 [2024-11-26 19:05:09.341689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.233 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:18.233 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dd6554c6-125e-4bc8-859d-20c38411e6e4 00:18:18.233 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z dd6554c6-125e-4bc8-859d-20c38411e6e4 ']' 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 [2024-11-26 19:05:09.405295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.234 [2024-11-26 19:05:09.405454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.234 [2024-11-26 19:05:09.405672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.234 [2024-11-26 19:05:09.405876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.234 [2024-11-26 19:05:09.405924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 [2024-11-26 19:05:09.545453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:18.234 [2024-11-26 19:05:09.548268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:18.234 [2024-11-26 19:05:09.548382] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:18.234 [2024-11-26 19:05:09.548491] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:18.234 [2024-11-26 19:05:09.548516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.234 [2024-11-26 19:05:09.548531] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:18.234 request: 00:18:18.234 { 00:18:18.234 "name": "raid_bdev1", 00:18:18.234 "raid_level": "raid1", 00:18:18.234 "base_bdevs": [ 00:18:18.234 "malloc1", 00:18:18.234 "malloc2" 00:18:18.234 ], 00:18:18.234 "superblock": false, 00:18:18.234 "method": "bdev_raid_create", 00:18:18.234 "req_id": 1 00:18:18.234 } 00:18:18.234 Got JSON-RPC error response 00:18:18.234 response: 00:18:18.234 { 00:18:18.234 "code": -17, 00:18:18.234 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:18.234 } 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:18.234 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.492 [2024-11-26 19:05:09.613437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.492 [2024-11-26 19:05:09.613696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.492 [2024-11-26 19:05:09.613771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:18.492 [2024-11-26 19:05:09.613998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.492 [2024-11-26 19:05:09.617147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.492 [2024-11-26 19:05:09.617205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.492 [2024-11-26 19:05:09.617332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:18.492 [2024-11-26 19:05:09.617406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.492 pt1 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.492 19:05:09 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.493 "name": "raid_bdev1", 00:18:18.493 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:18.493 "strip_size_kb": 0, 00:18:18.493 "state": "configuring", 00:18:18.493 "raid_level": "raid1", 00:18:18.493 "superblock": true, 00:18:18.493 "num_base_bdevs": 2, 00:18:18.493 "num_base_bdevs_discovered": 1, 00:18:18.493 "num_base_bdevs_operational": 2, 00:18:18.493 "base_bdevs_list": [ 00:18:18.493 { 00:18:18.493 "name": "pt1", 00:18:18.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.493 "is_configured": true, 00:18:18.493 "data_offset": 256, 00:18:18.493 "data_size": 7936 00:18:18.493 }, 00:18:18.493 { 00:18:18.493 "name": null, 00:18:18.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.493 "is_configured": false, 00:18:18.493 "data_offset": 256, 00:18:18.493 "data_size": 7936 00:18:18.493 } 
00:18:18.493 ] 00:18:18.493 }' 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.493 19:05:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.060 [2024-11-26 19:05:10.157640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.060 [2024-11-26 19:05:10.157876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.060 [2024-11-26 19:05:10.157969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:19.060 [2024-11-26 19:05:10.158160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.060 [2024-11-26 19:05:10.158777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.060 [2024-11-26 19:05:10.158832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.060 [2024-11-26 19:05:10.159080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:19.060 [2024-11-26 19:05:10.159163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.060 [2024-11-26 19:05:10.159449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:19.060 [2024-11-26 19:05:10.159580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:19.060 [2024-11-26 19:05:10.159964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:19.060 [2024-11-26 19:05:10.160285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:19.060 [2024-11-26 19:05:10.160414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:19.060 [2024-11-26 19:05:10.160727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.060 pt2 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.060 "name": "raid_bdev1", 00:18:19.060 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:19.060 "strip_size_kb": 0, 00:18:19.060 "state": "online", 00:18:19.060 "raid_level": "raid1", 00:18:19.060 "superblock": true, 00:18:19.060 "num_base_bdevs": 2, 00:18:19.060 "num_base_bdevs_discovered": 2, 00:18:19.060 "num_base_bdevs_operational": 2, 00:18:19.060 "base_bdevs_list": [ 00:18:19.060 { 00:18:19.060 "name": "pt1", 00:18:19.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.060 "is_configured": true, 00:18:19.060 "data_offset": 256, 00:18:19.060 "data_size": 7936 00:18:19.060 }, 00:18:19.060 { 00:18:19.060 "name": "pt2", 00:18:19.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.060 "is_configured": true, 00:18:19.060 "data_offset": 256, 00:18:19.060 "data_size": 7936 00:18:19.060 } 00:18:19.060 ] 00:18:19.060 }' 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.060 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.628 [2024-11-26 19:05:10.698069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.628 "name": "raid_bdev1", 00:18:19.628 "aliases": [ 00:18:19.628 "dd6554c6-125e-4bc8-859d-20c38411e6e4" 00:18:19.628 ], 00:18:19.628 "product_name": "Raid Volume", 00:18:19.628 "block_size": 4096, 00:18:19.628 "num_blocks": 7936, 00:18:19.628 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:19.628 "assigned_rate_limits": { 00:18:19.628 "rw_ios_per_sec": 0, 00:18:19.628 "rw_mbytes_per_sec": 0, 00:18:19.628 "r_mbytes_per_sec": 0, 00:18:19.628 "w_mbytes_per_sec": 0 00:18:19.628 }, 00:18:19.628 "claimed": false, 00:18:19.628 "zoned": false, 00:18:19.628 "supported_io_types": { 00:18:19.628 "read": true, 00:18:19.628 "write": true, 00:18:19.628 "unmap": false, 
00:18:19.628 "flush": false, 00:18:19.628 "reset": true, 00:18:19.628 "nvme_admin": false, 00:18:19.628 "nvme_io": false, 00:18:19.628 "nvme_io_md": false, 00:18:19.628 "write_zeroes": true, 00:18:19.628 "zcopy": false, 00:18:19.628 "get_zone_info": false, 00:18:19.628 "zone_management": false, 00:18:19.628 "zone_append": false, 00:18:19.628 "compare": false, 00:18:19.628 "compare_and_write": false, 00:18:19.628 "abort": false, 00:18:19.628 "seek_hole": false, 00:18:19.628 "seek_data": false, 00:18:19.628 "copy": false, 00:18:19.628 "nvme_iov_md": false 00:18:19.628 }, 00:18:19.628 "memory_domains": [ 00:18:19.628 { 00:18:19.628 "dma_device_id": "system", 00:18:19.628 "dma_device_type": 1 00:18:19.628 }, 00:18:19.628 { 00:18:19.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.628 "dma_device_type": 2 00:18:19.628 }, 00:18:19.628 { 00:18:19.628 "dma_device_id": "system", 00:18:19.628 "dma_device_type": 1 00:18:19.628 }, 00:18:19.628 { 00:18:19.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.628 "dma_device_type": 2 00:18:19.628 } 00:18:19.628 ], 00:18:19.628 "driver_specific": { 00:18:19.628 "raid": { 00:18:19.628 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:19.628 "strip_size_kb": 0, 00:18:19.628 "state": "online", 00:18:19.628 "raid_level": "raid1", 00:18:19.628 "superblock": true, 00:18:19.628 "num_base_bdevs": 2, 00:18:19.628 "num_base_bdevs_discovered": 2, 00:18:19.628 "num_base_bdevs_operational": 2, 00:18:19.628 "base_bdevs_list": [ 00:18:19.628 { 00:18:19.628 "name": "pt1", 00:18:19.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.628 "is_configured": true, 00:18:19.628 "data_offset": 256, 00:18:19.628 "data_size": 7936 00:18:19.628 }, 00:18:19.628 { 00:18:19.628 "name": "pt2", 00:18:19.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.628 "is_configured": true, 00:18:19.628 "data_offset": 256, 00:18:19.628 "data_size": 7936 00:18:19.628 } 00:18:19.628 ] 00:18:19.628 } 00:18:19.628 } 00:18:19.628 }' 00:18:19.628 
19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.628 pt2' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.628 
19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:19.628 [2024-11-26 19:05:10.966153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.628 19:05:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' dd6554c6-125e-4bc8-859d-20c38411e6e4 '!=' dd6554c6-125e-4bc8-859d-20c38411e6e4 ']' 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.887 [2024-11-26 19:05:11.017889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:19.887 
19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.887 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.888 "name": "raid_bdev1", 00:18:19.888 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 
00:18:19.888 "strip_size_kb": 0, 00:18:19.888 "state": "online", 00:18:19.888 "raid_level": "raid1", 00:18:19.888 "superblock": true, 00:18:19.888 "num_base_bdevs": 2, 00:18:19.888 "num_base_bdevs_discovered": 1, 00:18:19.888 "num_base_bdevs_operational": 1, 00:18:19.888 "base_bdevs_list": [ 00:18:19.888 { 00:18:19.888 "name": null, 00:18:19.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.888 "is_configured": false, 00:18:19.888 "data_offset": 0, 00:18:19.888 "data_size": 7936 00:18:19.888 }, 00:18:19.888 { 00:18:19.888 "name": "pt2", 00:18:19.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.888 "is_configured": true, 00:18:19.888 "data_offset": 256, 00:18:19.888 "data_size": 7936 00:18:19.888 } 00:18:19.888 ] 00:18:19.888 }' 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.888 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.455 [2024-11-26 19:05:11.534005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.455 [2024-11-26 19:05:11.535276] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.455 [2024-11-26 19:05:11.535409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.455 [2024-11-26 19:05:11.535477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.455 [2024-11-26 19:05:11.535496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:20.455 19:05:11 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:20.455 19:05:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.455 [2024-11-26 19:05:11.609994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.455 [2024-11-26 19:05:11.610195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.455 [2024-11-26 19:05:11.610262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:20.455 [2024-11-26 19:05:11.610401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.455 [2024-11-26 19:05:11.613480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.455 [2024-11-26 19:05:11.613644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.455 [2024-11-26 19:05:11.613890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.455 [2024-11-26 19:05:11.614084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.455 [2024-11-26 19:05:11.614348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:20.455 [2024-11-26 19:05:11.614380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.455 [2024-11-26 19:05:11.614668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:20.455 [2024-11-26 19:05:11.614870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:20.455 [2024-11-26 19:05:11.614886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:18:20.455 pt2 00:18:20.455 [2024-11-26 19:05:11.615118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.455 "name": "raid_bdev1", 00:18:20.455 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:20.455 "strip_size_kb": 0, 00:18:20.455 "state": "online", 00:18:20.455 "raid_level": "raid1", 00:18:20.455 "superblock": true, 00:18:20.455 "num_base_bdevs": 2, 00:18:20.455 "num_base_bdevs_discovered": 1, 00:18:20.455 "num_base_bdevs_operational": 1, 00:18:20.455 "base_bdevs_list": [ 00:18:20.455 { 00:18:20.455 "name": null, 00:18:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.455 "is_configured": false, 00:18:20.455 "data_offset": 256, 00:18:20.455 "data_size": 7936 00:18:20.455 }, 00:18:20.455 { 00:18:20.455 "name": "pt2", 00:18:20.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.455 "is_configured": true, 00:18:20.455 "data_offset": 256, 00:18:20.455 "data_size": 7936 00:18:20.455 } 00:18:20.455 ] 00:18:20.455 }' 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.455 19:05:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.024 [2024-11-26 19:05:12.186605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.024 [2024-11-26 19:05:12.186644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.024 [2024-11-26 19:05:12.186738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.024 [2024-11-26 19:05:12.186810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.024 [2024-11-26 19:05:12.186825] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.024 [2024-11-26 19:05:12.250655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.024 [2024-11-26 19:05:12.250874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.024 [2024-11-26 19:05:12.250968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:21.024 [2024-11-26 19:05:12.251096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.024 [2024-11-26 19:05:12.254186] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.024 [2024-11-26 19:05:12.254340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.024 [2024-11-26 19:05:12.254574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:21.024 [2024-11-26 19:05:12.254741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.024 [2024-11-26 19:05:12.255160] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:21.024 [2024-11-26 19:05:12.255319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.024 pt1 00:18:21.024 [2024-11-26 19:05:12.255460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:21.024 [2024-11-26 19:05:12.255663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.024 [2024-11-26 19:05:12.255912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:21.024 [2024-11-26 19:05:12.255931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:21.024 [2024-11-26 19:05:12.256255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:21.024 [2024-11-26 19:05:12.256559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:21.024 [2024-11-26 19:05:12.256676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:18:21.024 _bdev 0x617000008900 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.024 [2024-11-26 19:05:12.257093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.024 "name": "raid_bdev1", 00:18:21.024 "uuid": "dd6554c6-125e-4bc8-859d-20c38411e6e4", 00:18:21.024 "strip_size_kb": 0, 00:18:21.024 "state": "online", 00:18:21.024 "raid_level": 
"raid1", 00:18:21.024 "superblock": true, 00:18:21.024 "num_base_bdevs": 2, 00:18:21.024 "num_base_bdevs_discovered": 1, 00:18:21.024 "num_base_bdevs_operational": 1, 00:18:21.024 "base_bdevs_list": [ 00:18:21.024 { 00:18:21.024 "name": null, 00:18:21.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.024 "is_configured": false, 00:18:21.024 "data_offset": 256, 00:18:21.024 "data_size": 7936 00:18:21.024 }, 00:18:21.024 { 00:18:21.024 "name": "pt2", 00:18:21.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.024 "is_configured": true, 00:18:21.024 "data_offset": 256, 00:18:21.024 "data_size": 7936 00:18:21.024 } 00:18:21.024 ] 00:18:21.024 }' 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.024 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:21.592 [2024-11-26 19:05:12.839114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' dd6554c6-125e-4bc8-859d-20c38411e6e4 '!=' dd6554c6-125e-4bc8-859d-20c38411e6e4 ']' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86656 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86656 ']' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86656 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86656 00:18:21.592 killing process with pid 86656 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86656' 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86656 00:18:21.592 [2024-11-26 19:05:12.899169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.592 19:05:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86656 00:18:21.592 [2024-11-26 19:05:12.899290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.592 [2024-11-26 19:05:12.899365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.592 [2024-11-26 19:05:12.899387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:21.850 [2024-11-26 19:05:13.086958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.785 19:05:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:22.785 00:18:22.785 real 0m6.710s 00:18:22.785 user 0m10.615s 00:18:22.785 sys 0m0.978s 00:18:22.785 19:05:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.785 19:05:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.785 ************************************ 00:18:22.785 END TEST raid_superblock_test_4k 00:18:22.785 ************************************ 00:18:23.044 19:05:14 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:23.044 19:05:14 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:23.044 19:05:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:23.044 19:05:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.044 19:05:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.044 ************************************ 00:18:23.044 START TEST raid_rebuild_test_sb_4k 00:18:23.044 ************************************ 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:23.044 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:23.045 19:05:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86984 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86984 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86984 ']' 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.045 19:05:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.045 [2024-11-26 19:05:14.294634] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:18:23.045 [2024-11-26 19:05:14.295003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86984 ] 00:18:23.045 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:23.045 Zero copy mechanism will not be used. 00:18:23.303 [2024-11-26 19:05:14.470284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.303 [2024-11-26 19:05:14.599802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.561 [2024-11-26 19:05:14.806782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.561 [2024-11-26 19:05:14.807091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 BaseBdev1_malloc 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 [2024-11-26 19:05:15.357511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:24.130 [2024-11-26 19:05:15.357775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.130 [2024-11-26 19:05:15.357853] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:18:24.130 [2024-11-26 19:05:15.358062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.130 [2024-11-26 19:05:15.360984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.130 [2024-11-26 19:05:15.361033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:24.130 BaseBdev1 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 BaseBdev2_malloc 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 [2024-11-26 19:05:15.410824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:24.130 [2024-11-26 19:05:15.411050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.130 [2024-11-26 19:05:15.411196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:24.130 [2024-11-26 19:05:15.411332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:24.130 [2024-11-26 19:05:15.414269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.130 [2024-11-26 19:05:15.414317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:24.130 BaseBdev2 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 spare_malloc 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 spare_delay 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 [2024-11-26 19:05:15.489928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:24.130 [2024-11-26 19:05:15.490139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.130 [2024-11-26 19:05:15.490295] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:24.130 [2024-11-26 19:05:15.490448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.130 [2024-11-26 19:05:15.493577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.130 [2024-11-26 19:05:15.493749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:24.130 spare 00:18:24.130 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.389 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:24.389 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.389 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.389 [2024-11-26 19:05:15.498110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.389 [2024-11-26 19:05:15.500721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.389 [2024-11-26 19:05:15.501115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:24.389 [2024-11-26 19:05:15.501253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:24.389 [2024-11-26 19:05:15.501670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:24.389 [2024-11-26 19:05:15.501929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:24.389 [2024-11-26 19:05:15.501948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:24.389 [2024-11-26 19:05:15.502187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.389 
19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.389 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:24.389 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.390 "name": "raid_bdev1", 00:18:24.390 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 
00:18:24.390 "strip_size_kb": 0, 00:18:24.390 "state": "online", 00:18:24.390 "raid_level": "raid1", 00:18:24.390 "superblock": true, 00:18:24.390 "num_base_bdevs": 2, 00:18:24.390 "num_base_bdevs_discovered": 2, 00:18:24.390 "num_base_bdevs_operational": 2, 00:18:24.390 "base_bdevs_list": [ 00:18:24.390 { 00:18:24.390 "name": "BaseBdev1", 00:18:24.390 "uuid": "44142bf8-7b15-5002-9e96-f9f30b449bd3", 00:18:24.390 "is_configured": true, 00:18:24.390 "data_offset": 256, 00:18:24.390 "data_size": 7936 00:18:24.390 }, 00:18:24.390 { 00:18:24.390 "name": "BaseBdev2", 00:18:24.390 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:24.390 "is_configured": true, 00:18:24.390 "data_offset": 256, 00:18:24.390 "data_size": 7936 00:18:24.390 } 00:18:24.390 ] 00:18:24.390 }' 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.390 19:05:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.648 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.649 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:24.649 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.649 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.649 [2024-11-26 19:05:16.010736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.908 19:05:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.908 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:25.168 [2024-11-26 19:05:16.366574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:18:25.168 /dev/nbd0 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.168 1+0 records in 00:18:25.168 1+0 records out 00:18:25.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596035 s, 6.9 MB/s 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:25.168 19:05:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:25.168 19:05:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:26.105 7936+0 records in 00:18:26.105 7936+0 records out 00:18:26.105 32505856 bytes (33 MB, 31 MiB) copied, 0.956823 s, 34.0 MB/s 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.105 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:26.368 [2024-11-26 19:05:17.715379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.368 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.368 [2024-11-26 19:05:17.728160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.642 "name": "raid_bdev1", 00:18:26.642 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:26.642 "strip_size_kb": 0, 00:18:26.642 "state": "online", 00:18:26.642 "raid_level": "raid1", 00:18:26.642 "superblock": true, 00:18:26.642 "num_base_bdevs": 2, 00:18:26.642 "num_base_bdevs_discovered": 1, 00:18:26.642 "num_base_bdevs_operational": 1, 00:18:26.642 "base_bdevs_list": [ 00:18:26.642 { 00:18:26.642 "name": null, 00:18:26.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.642 "is_configured": false, 00:18:26.642 "data_offset": 0, 00:18:26.642 "data_size": 7936 00:18:26.642 }, 00:18:26.642 { 00:18:26.642 "name": "BaseBdev2", 00:18:26.642 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:26.642 "is_configured": true, 00:18:26.642 "data_offset": 256, 00:18:26.642 "data_size": 7936 00:18:26.642 } 00:18:26.642 ] 00:18:26.642 }' 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.642 19:05:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.900 19:05:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.900 19:05:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.900 19:05:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.900 [2024-11-26 19:05:18.240316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.900 [2024-11-26 19:05:18.256781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:26.900 19:05:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.901 19:05:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:26.901 [2024-11-26 19:05:18.259403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.277 19:05:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.277 "name": "raid_bdev1", 00:18:28.277 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:28.277 "strip_size_kb": 0, 00:18:28.277 "state": "online", 00:18:28.277 "raid_level": "raid1", 00:18:28.277 "superblock": true, 00:18:28.277 "num_base_bdevs": 2, 00:18:28.277 "num_base_bdevs_discovered": 2, 00:18:28.277 "num_base_bdevs_operational": 2, 00:18:28.277 "process": { 00:18:28.277 "type": "rebuild", 00:18:28.277 "target": "spare", 00:18:28.277 "progress": { 00:18:28.277 "blocks": 2560, 00:18:28.277 "percent": 32 00:18:28.277 } 00:18:28.277 }, 00:18:28.277 "base_bdevs_list": [ 00:18:28.277 { 00:18:28.277 "name": "spare", 00:18:28.277 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:28.277 "is_configured": true, 00:18:28.277 "data_offset": 256, 00:18:28.277 "data_size": 7936 00:18:28.277 }, 00:18:28.277 { 00:18:28.277 "name": "BaseBdev2", 00:18:28.277 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:28.277 "is_configured": true, 00:18:28.277 "data_offset": 256, 00:18:28.277 "data_size": 7936 00:18:28.277 } 00:18:28.277 ] 00:18:28.277 }' 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.277 19:05:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.277 [2024-11-26 19:05:19.400810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.277 [2024-11-26 19:05:19.468875] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.277 [2024-11-26 19:05:19.469212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.277 [2024-11-26 19:05:19.469242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.277 [2024-11-26 19:05:19.469259] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.277 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.278 19:05:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.278 "name": "raid_bdev1", 00:18:28.278 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:28.278 "strip_size_kb": 0, 00:18:28.278 "state": "online", 00:18:28.278 "raid_level": "raid1", 00:18:28.278 "superblock": true, 00:18:28.278 "num_base_bdevs": 2, 00:18:28.278 "num_base_bdevs_discovered": 1, 00:18:28.278 "num_base_bdevs_operational": 1, 00:18:28.278 "base_bdevs_list": [ 00:18:28.278 { 00:18:28.278 "name": null, 00:18:28.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.278 "is_configured": false, 00:18:28.278 "data_offset": 0, 00:18:28.278 "data_size": 7936 00:18:28.278 }, 00:18:28.278 { 00:18:28.278 "name": "BaseBdev2", 00:18:28.278 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:28.278 "is_configured": true, 00:18:28.278 "data_offset": 256, 00:18:28.278 "data_size": 7936 00:18:28.278 } 00:18:28.278 ] 00:18:28.278 }' 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.278 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.845 19:05:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.845 19:05:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.845 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.845 "name": "raid_bdev1", 00:18:28.845 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:28.845 "strip_size_kb": 0, 00:18:28.845 "state": "online", 00:18:28.845 "raid_level": "raid1", 00:18:28.845 "superblock": true, 00:18:28.845 "num_base_bdevs": 2, 00:18:28.845 "num_base_bdevs_discovered": 1, 00:18:28.845 "num_base_bdevs_operational": 1, 00:18:28.845 "base_bdevs_list": [ 00:18:28.845 { 00:18:28.845 "name": null, 00:18:28.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.845 "is_configured": false, 00:18:28.845 "data_offset": 0, 00:18:28.845 "data_size": 7936 00:18:28.845 }, 00:18:28.845 { 00:18:28.845 "name": "BaseBdev2", 00:18:28.846 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:28.846 "is_configured": true, 00:18:28.846 "data_offset": 256, 00:18:28.846 "data_size": 7936 00:18:28.846 } 00:18:28.846 ] 00:18:28.846 }' 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.846 19:05:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.846 [2024-11-26 19:05:20.125942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.846 [2024-11-26 19:05:20.142557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.846 19:05:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:28.846 [2024-11-26 19:05:20.145350] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.220 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.220 "name": "raid_bdev1", 00:18:30.220 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:30.220 "strip_size_kb": 0, 00:18:30.220 "state": "online", 00:18:30.220 "raid_level": "raid1", 00:18:30.220 "superblock": true, 00:18:30.220 "num_base_bdevs": 2, 00:18:30.220 "num_base_bdevs_discovered": 2, 00:18:30.220 "num_base_bdevs_operational": 2, 00:18:30.220 "process": { 00:18:30.220 "type": "rebuild", 00:18:30.220 "target": "spare", 00:18:30.220 "progress": { 00:18:30.220 "blocks": 2560, 00:18:30.220 "percent": 32 00:18:30.220 } 00:18:30.220 }, 00:18:30.220 "base_bdevs_list": [ 00:18:30.220 { 00:18:30.220 "name": "spare", 00:18:30.220 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:30.220 "is_configured": true, 00:18:30.220 "data_offset": 256, 00:18:30.220 "data_size": 7936 00:18:30.220 }, 00:18:30.220 { 00:18:30.220 "name": "BaseBdev2", 00:18:30.220 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:30.220 "is_configured": true, 00:18:30.220 "data_offset": 256, 00:18:30.220 "data_size": 7936 00:18:30.220 } 00:18:30.220 ] 00:18:30.220 }' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:30.221 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=740 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.221 19:05:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.221 "name": "raid_bdev1", 00:18:30.221 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:30.221 "strip_size_kb": 0, 00:18:30.221 "state": "online", 00:18:30.221 "raid_level": "raid1", 00:18:30.221 "superblock": true, 00:18:30.221 "num_base_bdevs": 2, 00:18:30.221 "num_base_bdevs_discovered": 2, 00:18:30.221 "num_base_bdevs_operational": 2, 00:18:30.221 "process": { 00:18:30.221 "type": "rebuild", 00:18:30.221 "target": "spare", 00:18:30.221 "progress": { 00:18:30.221 "blocks": 2816, 00:18:30.221 "percent": 35 00:18:30.221 } 00:18:30.221 }, 00:18:30.221 "base_bdevs_list": [ 00:18:30.221 { 00:18:30.221 "name": "spare", 00:18:30.221 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:30.221 "is_configured": true, 00:18:30.221 "data_offset": 256, 00:18:30.221 "data_size": 7936 00:18:30.221 }, 00:18:30.221 { 00:18:30.221 "name": "BaseBdev2", 00:18:30.221 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:30.221 "is_configured": true, 00:18:30.221 "data_offset": 256, 00:18:30.221 "data_size": 7936 00:18:30.221 } 00:18:30.221 ] 00:18:30.221 }' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.221 19:05:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:31.158 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.158 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.158 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.158 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.158 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.159 "name": "raid_bdev1", 00:18:31.159 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:31.159 "strip_size_kb": 0, 00:18:31.159 "state": "online", 00:18:31.159 "raid_level": "raid1", 00:18:31.159 "superblock": true, 00:18:31.159 "num_base_bdevs": 2, 00:18:31.159 "num_base_bdevs_discovered": 2, 00:18:31.159 "num_base_bdevs_operational": 2, 00:18:31.159 "process": { 00:18:31.159 "type": "rebuild", 00:18:31.159 "target": "spare", 00:18:31.159 "progress": { 00:18:31.159 "blocks": 5888, 00:18:31.159 "percent": 74 00:18:31.159 } 00:18:31.159 }, 00:18:31.159 "base_bdevs_list": [ 00:18:31.159 { 00:18:31.159 "name": "spare", 00:18:31.159 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:31.159 "is_configured": true, 00:18:31.159 "data_offset": 256, 00:18:31.159 "data_size": 7936 00:18:31.159 
}, 00:18:31.159 { 00:18:31.159 "name": "BaseBdev2", 00:18:31.159 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:31.159 "is_configured": true, 00:18:31.159 "data_offset": 256, 00:18:31.159 "data_size": 7936 00:18:31.159 } 00:18:31.159 ] 00:18:31.159 }' 00:18:31.159 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.417 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.417 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.417 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.417 19:05:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.078 [2024-11-26 19:05:23.269035] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:32.078 [2024-11-26 19:05:23.269139] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:32.078 [2024-11-26 19:05:23.269317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.337 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.596 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.597 "name": "raid_bdev1", 00:18:32.597 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:32.597 "strip_size_kb": 0, 00:18:32.597 "state": "online", 00:18:32.597 "raid_level": "raid1", 00:18:32.597 "superblock": true, 00:18:32.597 "num_base_bdevs": 2, 00:18:32.597 "num_base_bdevs_discovered": 2, 00:18:32.597 "num_base_bdevs_operational": 2, 00:18:32.597 "base_bdevs_list": [ 00:18:32.597 { 00:18:32.597 "name": "spare", 00:18:32.597 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:32.597 "is_configured": true, 00:18:32.597 "data_offset": 256, 00:18:32.597 "data_size": 7936 00:18:32.597 }, 00:18:32.597 { 00:18:32.597 "name": "BaseBdev2", 00:18:32.597 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:32.597 "is_configured": true, 00:18:32.597 "data_offset": 256, 00:18:32.597 "data_size": 7936 00:18:32.597 } 00:18:32.597 ] 00:18:32.597 }' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.597 "name": "raid_bdev1", 00:18:32.597 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:32.597 "strip_size_kb": 0, 00:18:32.597 "state": "online", 00:18:32.597 "raid_level": "raid1", 00:18:32.597 "superblock": true, 00:18:32.597 "num_base_bdevs": 2, 00:18:32.597 "num_base_bdevs_discovered": 2, 00:18:32.597 "num_base_bdevs_operational": 2, 00:18:32.597 "base_bdevs_list": [ 00:18:32.597 { 00:18:32.597 "name": "spare", 00:18:32.597 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:32.597 "is_configured": true, 00:18:32.597 "data_offset": 256, 00:18:32.597 "data_size": 7936 00:18:32.597 }, 00:18:32.597 { 00:18:32.597 "name": "BaseBdev2", 00:18:32.597 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:32.597 "is_configured": true, 
00:18:32.597 "data_offset": 256, 00:18:32.597 "data_size": 7936 00:18:32.597 } 00:18:32.597 ] 00:18:32.597 }' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.597 19:05:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.597 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.856 19:05:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.856 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.856 "name": "raid_bdev1", 00:18:32.856 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:32.856 "strip_size_kb": 0, 00:18:32.856 "state": "online", 00:18:32.856 "raid_level": "raid1", 00:18:32.856 "superblock": true, 00:18:32.856 "num_base_bdevs": 2, 00:18:32.856 "num_base_bdevs_discovered": 2, 00:18:32.856 "num_base_bdevs_operational": 2, 00:18:32.856 "base_bdevs_list": [ 00:18:32.856 { 00:18:32.856 "name": "spare", 00:18:32.856 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:32.856 "is_configured": true, 00:18:32.856 "data_offset": 256, 00:18:32.856 "data_size": 7936 00:18:32.856 }, 00:18:32.856 { 00:18:32.856 "name": "BaseBdev2", 00:18:32.856 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:32.856 "is_configured": true, 00:18:32.856 "data_offset": 256, 00:18:32.856 "data_size": 7936 00:18:32.856 } 00:18:32.856 ] 00:18:32.856 }' 00:18:32.856 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.856 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.116 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.116 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.116 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.116 [2024-11-26 19:05:24.474120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.116 [2024-11-26 19:05:24.474161] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:33.116 [2024-11-26 19:05:24.474267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.116 [2024-11-26 19:05:24.474386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.116 [2024-11-26 19:05:24.474405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:33.116 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:33.375 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:33.634 /dev/nbd0 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.634 1+0 records in 00:18:33.634 1+0 records out 00:18:33.634 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435849 s, 9.4 MB/s 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:33.634 19:05:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:33.893 /dev/nbd1 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:34.152 19:05:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.152 1+0 records in 00:18:34.152 1+0 records out 00:18:34.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037727 s, 10.9 MB/s 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.152 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.721 19:05:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.721 19:05:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.721 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.981 [2024-11-26 19:05:26.086560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.981 [2024-11-26 19:05:26.086761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.981 [2024-11-26 19:05:26.086811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:34.981 [2024-11-26 19:05:26.086829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.981 [2024-11-26 19:05:26.090070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.981 [2024-11-26 19:05:26.090115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.981 [2024-11-26 19:05:26.090239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:18:34.981 [2024-11-26 19:05:26.090332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.981 spare 00:18:34.981 [2024-11-26 19:05:26.090560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.981 [2024-11-26 19:05:26.190755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:34.981 [2024-11-26 19:05:26.191082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.981 [2024-11-26 19:05:26.191605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:34.981 [2024-11-26 19:05:26.192078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:34.981 [2024-11-26 19:05:26.192231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:34.981 [2024-11-26 19:05:26.192618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.981 
19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.981 "name": "raid_bdev1", 00:18:34.981 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:34.981 "strip_size_kb": 0, 00:18:34.981 "state": "online", 00:18:34.981 "raid_level": "raid1", 00:18:34.981 "superblock": true, 00:18:34.981 "num_base_bdevs": 2, 00:18:34.981 "num_base_bdevs_discovered": 2, 00:18:34.981 "num_base_bdevs_operational": 2, 00:18:34.981 "base_bdevs_list": [ 00:18:34.981 { 00:18:34.981 "name": "spare", 00:18:34.981 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:34.981 "is_configured": true, 00:18:34.981 "data_offset": 256, 00:18:34.981 
"data_size": 7936 00:18:34.981 }, 00:18:34.981 { 00:18:34.981 "name": "BaseBdev2", 00:18:34.981 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:34.981 "is_configured": true, 00:18:34.981 "data_offset": 256, 00:18:34.981 "data_size": 7936 00:18:34.981 } 00:18:34.981 ] 00:18:34.981 }' 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.981 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.549 "name": "raid_bdev1", 00:18:35.549 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:35.549 "strip_size_kb": 0, 00:18:35.549 "state": "online", 00:18:35.549 "raid_level": "raid1", 00:18:35.549 "superblock": true, 00:18:35.549 "num_base_bdevs": 2, 
00:18:35.549 "num_base_bdevs_discovered": 2, 00:18:35.549 "num_base_bdevs_operational": 2, 00:18:35.549 "base_bdevs_list": [ 00:18:35.549 { 00:18:35.549 "name": "spare", 00:18:35.549 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:35.549 "is_configured": true, 00:18:35.549 "data_offset": 256, 00:18:35.549 "data_size": 7936 00:18:35.549 }, 00:18:35.549 { 00:18:35.549 "name": "BaseBdev2", 00:18:35.549 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:35.549 "is_configured": true, 00:18:35.549 "data_offset": 256, 00:18:35.549 "data_size": 7936 00:18:35.549 } 00:18:35.549 ] 00:18:35.549 }' 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.549 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.808 19:05:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.808 [2024-11-26 19:05:26.938938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.808 19:05:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.808 
19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.808 "name": "raid_bdev1", 00:18:35.808 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:35.808 "strip_size_kb": 0, 00:18:35.808 "state": "online", 00:18:35.808 "raid_level": "raid1", 00:18:35.808 "superblock": true, 00:18:35.808 "num_base_bdevs": 2, 00:18:35.808 "num_base_bdevs_discovered": 1, 00:18:35.808 "num_base_bdevs_operational": 1, 00:18:35.808 "base_bdevs_list": [ 00:18:35.808 { 00:18:35.808 "name": null, 00:18:35.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.808 "is_configured": false, 00:18:35.808 "data_offset": 0, 00:18:35.808 "data_size": 7936 00:18:35.808 }, 00:18:35.808 { 00:18:35.808 "name": "BaseBdev2", 00:18:35.808 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:35.808 "is_configured": true, 00:18:35.808 "data_offset": 256, 00:18:35.808 "data_size": 7936 00:18:35.808 } 00:18:35.808 ] 00:18:35.808 }' 00:18:35.808 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.808 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.376 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:36.376 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.376 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.376 [2024-11-26 19:05:27.507153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.376 [2024-11-26 19:05:27.507467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.376 [2024-11-26 19:05:27.507494] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:36.376 [2024-11-26 19:05:27.507554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.376 [2024-11-26 19:05:27.523341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:36.376 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.376 19:05:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:36.376 [2024-11-26 19:05:27.525906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.348 "name": "raid_bdev1", 00:18:37.348 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:37.348 "strip_size_kb": 0, 00:18:37.348 "state": "online", 
00:18:37.348 "raid_level": "raid1", 00:18:37.348 "superblock": true, 00:18:37.348 "num_base_bdevs": 2, 00:18:37.348 "num_base_bdevs_discovered": 2, 00:18:37.348 "num_base_bdevs_operational": 2, 00:18:37.348 "process": { 00:18:37.348 "type": "rebuild", 00:18:37.348 "target": "spare", 00:18:37.348 "progress": { 00:18:37.348 "blocks": 2560, 00:18:37.348 "percent": 32 00:18:37.348 } 00:18:37.348 }, 00:18:37.348 "base_bdevs_list": [ 00:18:37.348 { 00:18:37.348 "name": "spare", 00:18:37.348 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:37.348 "is_configured": true, 00:18:37.348 "data_offset": 256, 00:18:37.348 "data_size": 7936 00:18:37.348 }, 00:18:37.348 { 00:18:37.348 "name": "BaseBdev2", 00:18:37.348 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:37.348 "is_configured": true, 00:18:37.348 "data_offset": 256, 00:18:37.348 "data_size": 7936 00:18:37.348 } 00:18:37.348 ] 00:18:37.348 }' 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.348 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.348 [2024-11-26 19:05:28.703635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.607 [2024-11-26 19:05:28.735533] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.607 [2024-11-26 
19:05:28.735759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.607 [2024-11-26 19:05:28.735787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.607 [2024-11-26 19:05:28.735803] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.607 "name": "raid_bdev1", 00:18:37.607 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:37.607 "strip_size_kb": 0, 00:18:37.607 "state": "online", 00:18:37.607 "raid_level": "raid1", 00:18:37.607 "superblock": true, 00:18:37.607 "num_base_bdevs": 2, 00:18:37.607 "num_base_bdevs_discovered": 1, 00:18:37.607 "num_base_bdevs_operational": 1, 00:18:37.607 "base_bdevs_list": [ 00:18:37.607 { 00:18:37.607 "name": null, 00:18:37.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.607 "is_configured": false, 00:18:37.607 "data_offset": 0, 00:18:37.607 "data_size": 7936 00:18:37.607 }, 00:18:37.607 { 00:18:37.607 "name": "BaseBdev2", 00:18:37.607 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:37.607 "is_configured": true, 00:18:37.607 "data_offset": 256, 00:18:37.607 "data_size": 7936 00:18:37.607 } 00:18:37.607 ] 00:18:37.607 }' 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.607 19:05:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.174 19:05:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:38.174 19:05:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.174 19:05:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.174 [2024-11-26 19:05:29.279825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:38.174 [2024-11-26 19:05:29.280086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.174 [2024-11-26 19:05:29.280242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:38.174 [2024-11-26 19:05:29.280386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.174 [2024-11-26 19:05:29.281060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.174 [2024-11-26 19:05:29.281217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:38.174 [2024-11-26 19:05:29.281462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:38.174 [2024-11-26 19:05:29.281610] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:38.174 [2024-11-26 19:05:29.281744] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:38.174 [2024-11-26 19:05:29.281805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.174 spare 00:18:38.174 [2024-11-26 19:05:29.297759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:38.174 19:05:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.174 19:05:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:38.174 [2024-11-26 19:05:29.300291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.109 "name": "raid_bdev1", 00:18:39.109 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:39.109 "strip_size_kb": 0, 00:18:39.109 "state": "online", 00:18:39.109 "raid_level": "raid1", 00:18:39.109 "superblock": true, 00:18:39.109 "num_base_bdevs": 2, 00:18:39.109 "num_base_bdevs_discovered": 2, 00:18:39.109 "num_base_bdevs_operational": 2, 00:18:39.109 "process": { 00:18:39.109 "type": "rebuild", 00:18:39.109 "target": "spare", 00:18:39.109 "progress": { 00:18:39.109 "blocks": 2560, 00:18:39.109 "percent": 32 00:18:39.109 } 00:18:39.109 }, 00:18:39.109 "base_bdevs_list": [ 00:18:39.109 { 00:18:39.109 "name": "spare", 00:18:39.109 "uuid": "06cc682d-2581-5d6e-8ce6-ea3fc694cb86", 00:18:39.109 "is_configured": true, 00:18:39.109 "data_offset": 256, 00:18:39.109 "data_size": 7936 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "name": "BaseBdev2", 00:18:39.109 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:39.109 "is_configured": true, 00:18:39.109 "data_offset": 256, 00:18:39.109 "data_size": 7936 00:18:39.109 } 00:18:39.109 ] 00:18:39.109 }' 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:39.109 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.368 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.368 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:39.368 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.368 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.368 [2024-11-26 19:05:30.490052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.369 [2024-11-26 19:05:30.510019] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:39.369 [2024-11-26 19:05:30.510112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.369 [2024-11-26 19:05:30.510151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.369 [2024-11-26 19:05:30.510169] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.369 "name": "raid_bdev1", 00:18:39.369 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:39.369 "strip_size_kb": 0, 00:18:39.369 "state": "online", 00:18:39.369 "raid_level": "raid1", 00:18:39.369 "superblock": true, 00:18:39.369 "num_base_bdevs": 2, 00:18:39.369 "num_base_bdevs_discovered": 1, 00:18:39.369 "num_base_bdevs_operational": 1, 00:18:39.369 "base_bdevs_list": [ 00:18:39.369 { 00:18:39.369 "name": null, 00:18:39.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.369 "is_configured": false, 00:18:39.369 "data_offset": 0, 00:18:39.369 "data_size": 7936 00:18:39.369 }, 00:18:39.369 { 00:18:39.369 "name": "BaseBdev2", 00:18:39.369 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:39.369 "is_configured": true, 00:18:39.369 "data_offset": 256, 00:18:39.369 "data_size": 7936 00:18:39.369 } 00:18:39.369 ] 00:18:39.369 }' 
00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.369 19:05:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.936 "name": "raid_bdev1", 00:18:39.936 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:39.936 "strip_size_kb": 0, 00:18:39.936 "state": "online", 00:18:39.936 "raid_level": "raid1", 00:18:39.936 "superblock": true, 00:18:39.936 "num_base_bdevs": 2, 00:18:39.936 "num_base_bdevs_discovered": 1, 00:18:39.936 "num_base_bdevs_operational": 1, 00:18:39.936 "base_bdevs_list": [ 00:18:39.936 { 00:18:39.936 "name": null, 00:18:39.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.936 "is_configured": false, 00:18:39.936 "data_offset": 0, 
00:18:39.936 "data_size": 7936 00:18:39.936 }, 00:18:39.936 { 00:18:39.936 "name": "BaseBdev2", 00:18:39.936 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:39.936 "is_configured": true, 00:18:39.936 "data_offset": 256, 00:18:39.936 "data_size": 7936 00:18:39.936 } 00:18:39.936 ] 00:18:39.936 }' 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.936 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 [2024-11-26 19:05:31.247647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.936 [2024-11-26 19:05:31.247882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.936 [2024-11-26 19:05:31.248063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:39.936 [2024-11-26 19:05:31.248102] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.936 [2024-11-26 19:05:31.248721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.936 [2024-11-26 19:05:31.248791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.936 [2024-11-26 19:05:31.249054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:39.936 [2024-11-26 19:05:31.249115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.937 [2024-11-26 19:05:31.249339] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:39.937 [2024-11-26 19:05:31.249463] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:39.937 BaseBdev1 00:18:39.937 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.937 19:05:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.314 "name": "raid_bdev1", 00:18:41.314 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:41.314 "strip_size_kb": 0, 00:18:41.314 "state": "online", 00:18:41.314 "raid_level": "raid1", 00:18:41.314 "superblock": true, 00:18:41.314 "num_base_bdevs": 2, 00:18:41.314 "num_base_bdevs_discovered": 1, 00:18:41.314 "num_base_bdevs_operational": 1, 00:18:41.314 "base_bdevs_list": [ 00:18:41.314 { 00:18:41.314 "name": null, 00:18:41.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.314 "is_configured": false, 00:18:41.314 "data_offset": 0, 00:18:41.314 "data_size": 7936 00:18:41.314 }, 00:18:41.314 { 00:18:41.314 "name": "BaseBdev2", 00:18:41.314 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:41.314 "is_configured": true, 00:18:41.314 "data_offset": 256, 00:18:41.314 "data_size": 7936 00:18:41.314 } 00:18:41.314 ] 00:18:41.314 }' 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.314 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.573 "name": "raid_bdev1", 00:18:41.573 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:41.573 "strip_size_kb": 0, 00:18:41.573 "state": "online", 00:18:41.573 "raid_level": "raid1", 00:18:41.573 "superblock": true, 00:18:41.573 "num_base_bdevs": 2, 00:18:41.573 "num_base_bdevs_discovered": 1, 00:18:41.573 "num_base_bdevs_operational": 1, 00:18:41.573 "base_bdevs_list": [ 00:18:41.573 { 00:18:41.573 "name": null, 00:18:41.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.573 "is_configured": false, 00:18:41.573 "data_offset": 0, 00:18:41.573 "data_size": 7936 00:18:41.573 }, 00:18:41.573 { 00:18:41.573 "name": "BaseBdev2", 00:18:41.573 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:41.573 "is_configured": true, 
00:18:41.573 "data_offset": 256, 00:18:41.573 "data_size": 7936 00:18:41.573 } 00:18:41.573 ] 00:18:41.573 }' 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.573 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.832 [2024-11-26 19:05:32.968329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.832 [2024-11-26 19:05:32.968786] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:41.832 [2024-11-26 19:05:32.968832] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:41.832 request: 00:18:41.832 { 00:18:41.832 "base_bdev": "BaseBdev1", 00:18:41.832 "raid_bdev": "raid_bdev1", 00:18:41.832 "method": "bdev_raid_add_base_bdev", 00:18:41.832 "req_id": 1 00:18:41.832 } 00:18:41.832 Got JSON-RPC error response 00:18:41.832 response: 00:18:41.832 { 00:18:41.832 "code": -22, 00:18:41.832 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:41.832 } 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.832 19:05:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.765 19:05:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.765 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.765 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.765 "name": "raid_bdev1", 00:18:42.765 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:42.765 "strip_size_kb": 0, 00:18:42.765 "state": "online", 00:18:42.765 "raid_level": "raid1", 00:18:42.765 "superblock": true, 00:18:42.765 "num_base_bdevs": 2, 00:18:42.765 "num_base_bdevs_discovered": 1, 00:18:42.765 "num_base_bdevs_operational": 1, 00:18:42.765 "base_bdevs_list": [ 00:18:42.765 { 00:18:42.765 "name": null, 00:18:42.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.765 "is_configured": false, 00:18:42.765 "data_offset": 0, 00:18:42.765 "data_size": 7936 00:18:42.765 }, 00:18:42.765 { 00:18:42.765 "name": "BaseBdev2", 00:18:42.765 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:42.765 "is_configured": true, 00:18:42.765 "data_offset": 256, 00:18:42.765 "data_size": 7936 00:18:42.765 } 00:18:42.765 ] 00:18:42.765 }' 
00:18:42.765 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.765 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.332 "name": "raid_bdev1", 00:18:43.332 "uuid": "d62a9590-0823-4c76-8754-d3ccba15a7fb", 00:18:43.332 "strip_size_kb": 0, 00:18:43.332 "state": "online", 00:18:43.332 "raid_level": "raid1", 00:18:43.332 "superblock": true, 00:18:43.332 "num_base_bdevs": 2, 00:18:43.332 "num_base_bdevs_discovered": 1, 00:18:43.332 "num_base_bdevs_operational": 1, 00:18:43.332 "base_bdevs_list": [ 00:18:43.332 { 00:18:43.332 "name": null, 00:18:43.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.332 "is_configured": false, 00:18:43.332 "data_offset": 0, 
00:18:43.332 "data_size": 7936 00:18:43.332 }, 00:18:43.332 { 00:18:43.332 "name": "BaseBdev2", 00:18:43.332 "uuid": "34fc7faf-f912-5d9b-a568-701a7693e00b", 00:18:43.332 "is_configured": true, 00:18:43.332 "data_offset": 256, 00:18:43.332 "data_size": 7936 00:18:43.332 } 00:18:43.332 ] 00:18:43.332 }' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86984 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86984 ']' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86984 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86984 00:18:43.332 killing process with pid 86984 00:18:43.332 Received shutdown signal, test time was about 60.000000 seconds 00:18:43.332 00:18:43.332 Latency(us) 00:18:43.332 [2024-11-26T19:05:34.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.332 [2024-11-26T19:05:34.699Z] =================================================================================================================== 00:18:43.332 [2024-11-26T19:05:34.699Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.332 19:05:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86984' 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86984 00:18:43.332 [2024-11-26 19:05:34.687829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.332 19:05:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86984 00:18:43.332 [2024-11-26 19:05:34.688040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.332 [2024-11-26 19:05:34.688115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.332 [2024-11-26 19:05:34.688136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:43.900 [2024-11-26 19:05:34.962684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.860 19:05:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:44.860 00:18:44.860 real 0m21.825s 00:18:44.860 user 0m29.588s 00:18:44.860 sys 0m2.543s 00:18:44.860 19:05:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.860 ************************************ 00:18:44.860 END TEST raid_rebuild_test_sb_4k 00:18:44.860 ************************************ 00:18:44.860 19:05:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.860 19:05:36 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:44.860 19:05:36 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:44.860 19:05:36 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:44.860 19:05:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.860 19:05:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.860 ************************************ 00:18:44.860 START TEST raid_state_function_test_sb_md_separate 00:18:44.860 ************************************ 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87693 00:18:44.860 Process raid pid: 87693 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87693' 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87693 00:18:44.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87693 ']' 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.860 19:05:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.860 [2024-11-26 19:05:36.175263] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:18:44.860 [2024-11-26 19:05:36.175413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.121 [2024-11-26 19:05:36.352674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.121 [2024-11-26 19:05:36.480844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.380 [2024-11-26 19:05:36.684551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.380 [2024-11-26 19:05:36.684603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:45.946 19:05:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.946 [2024-11-26 19:05:37.200830] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:45.946 [2024-11-26 19:05:37.201081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:45.946 [2024-11-26 19:05:37.201112] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:45.946 [2024-11-26 19:05:37.201131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.946 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.947 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.947 "name": "Existed_Raid", 00:18:45.947 "uuid": "2ec46966-2965-4705-94d9-459cbbaf7b09", 00:18:45.947 "strip_size_kb": 0, 00:18:45.947 "state": "configuring", 00:18:45.947 "raid_level": "raid1", 00:18:45.947 "superblock": true, 00:18:45.947 "num_base_bdevs": 2, 00:18:45.947 "num_base_bdevs_discovered": 0, 00:18:45.947 "num_base_bdevs_operational": 2, 00:18:45.947 "base_bdevs_list": [ 00:18:45.947 { 00:18:45.947 "name": "BaseBdev1", 00:18:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.947 "is_configured": false, 00:18:45.947 "data_offset": 0, 00:18:45.947 "data_size": 0 00:18:45.947 }, 00:18:45.947 { 00:18:45.947 "name": "BaseBdev2", 00:18:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.947 "is_configured": false, 00:18:45.947 "data_offset": 0, 00:18:45.947 "data_size": 0 00:18:45.947 } 00:18:45.947 ] 
00:18:45.947 }' 00:18:45.947 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.947 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 [2024-11-26 19:05:37.724877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:46.514 [2024-11-26 19:05:37.724949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 [2024-11-26 19:05:37.732872] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.514 [2024-11-26 19:05:37.733098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.514 [2024-11-26 19:05:37.733223] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.514 [2024-11-26 19:05:37.733289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.514 
19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.514 [2024-11-26 19:05:37.779454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.514 BaseBdev1 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:46.514 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 [ 00:18:46.515 { 00:18:46.515 "name": "BaseBdev1", 00:18:46.515 "aliases": [ 00:18:46.515 "82aaaab3-ee55-4dc5-ab56-e2424f342be3" 00:18:46.515 ], 00:18:46.515 "product_name": "Malloc disk", 00:18:46.515 "block_size": 4096, 00:18:46.515 "num_blocks": 8192, 00:18:46.515 "uuid": "82aaaab3-ee55-4dc5-ab56-e2424f342be3", 00:18:46.515 "md_size": 32, 00:18:46.515 "md_interleave": false, 00:18:46.515 "dif_type": 0, 00:18:46.515 "assigned_rate_limits": { 00:18:46.515 "rw_ios_per_sec": 0, 00:18:46.515 "rw_mbytes_per_sec": 0, 00:18:46.515 "r_mbytes_per_sec": 0, 00:18:46.515 "w_mbytes_per_sec": 0 00:18:46.515 }, 00:18:46.515 "claimed": true, 00:18:46.515 "claim_type": "exclusive_write", 00:18:46.515 "zoned": false, 00:18:46.515 "supported_io_types": { 00:18:46.515 "read": true, 00:18:46.515 "write": true, 00:18:46.515 "unmap": true, 00:18:46.515 "flush": true, 00:18:46.515 "reset": true, 00:18:46.515 "nvme_admin": false, 00:18:46.515 "nvme_io": false, 00:18:46.515 "nvme_io_md": false, 00:18:46.515 "write_zeroes": true, 00:18:46.515 "zcopy": true, 00:18:46.515 "get_zone_info": false, 00:18:46.515 "zone_management": false, 00:18:46.515 "zone_append": false, 00:18:46.515 "compare": false, 00:18:46.515 "compare_and_write": false, 00:18:46.515 "abort": true, 00:18:46.515 "seek_hole": false, 00:18:46.515 "seek_data": false, 00:18:46.515 "copy": true, 00:18:46.515 "nvme_iov_md": false 00:18:46.515 }, 00:18:46.515 "memory_domains": [ 00:18:46.515 { 00:18:46.515 "dma_device_id": "system", 00:18:46.515 "dma_device_type": 1 00:18:46.515 }, 
00:18:46.515 { 00:18:46.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.515 "dma_device_type": 2 00:18:46.515 } 00:18:46.515 ], 00:18:46.515 "driver_specific": {} 00:18:46.515 } 00:18:46.515 ] 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.515 "name": "Existed_Raid", 00:18:46.515 "uuid": "df691f20-8cd2-49c9-ac8d-f306a940e466", 00:18:46.515 "strip_size_kb": 0, 00:18:46.515 "state": "configuring", 00:18:46.515 "raid_level": "raid1", 00:18:46.515 "superblock": true, 00:18:46.515 "num_base_bdevs": 2, 00:18:46.515 "num_base_bdevs_discovered": 1, 00:18:46.515 "num_base_bdevs_operational": 2, 00:18:46.515 "base_bdevs_list": [ 00:18:46.515 { 00:18:46.515 "name": "BaseBdev1", 00:18:46.515 "uuid": "82aaaab3-ee55-4dc5-ab56-e2424f342be3", 00:18:46.515 "is_configured": true, 00:18:46.515 "data_offset": 256, 00:18:46.515 "data_size": 7936 00:18:46.515 }, 00:18:46.515 { 00:18:46.515 "name": "BaseBdev2", 00:18:46.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.515 "is_configured": false, 00:18:46.515 "data_offset": 0, 00:18:46.515 "data_size": 0 00:18:46.515 } 00:18:46.515 ] 00:18:46.515 }' 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.515 19:05:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:47.082 [2024-11-26 19:05:38.335756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.082 [2024-11-26 19:05:38.335818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.082 [2024-11-26 19:05:38.343776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.082 [2024-11-26 19:05:38.346548] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.082 [2024-11-26 19:05:38.346755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.082 "name": "Existed_Raid", 00:18:47.082 "uuid": "b152a873-8f41-4cee-80a5-7d1eafdad8b7", 00:18:47.082 "strip_size_kb": 0, 00:18:47.082 "state": "configuring", 00:18:47.082 "raid_level": "raid1", 00:18:47.082 "superblock": true, 00:18:47.082 "num_base_bdevs": 2, 00:18:47.082 "num_base_bdevs_discovered": 1, 00:18:47.082 
"num_base_bdevs_operational": 2, 00:18:47.082 "base_bdevs_list": [ 00:18:47.082 { 00:18:47.082 "name": "BaseBdev1", 00:18:47.082 "uuid": "82aaaab3-ee55-4dc5-ab56-e2424f342be3", 00:18:47.082 "is_configured": true, 00:18:47.082 "data_offset": 256, 00:18:47.082 "data_size": 7936 00:18:47.082 }, 00:18:47.082 { 00:18:47.082 "name": "BaseBdev2", 00:18:47.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.082 "is_configured": false, 00:18:47.082 "data_offset": 0, 00:18:47.082 "data_size": 0 00:18:47.082 } 00:18:47.082 ] 00:18:47.082 }' 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.082 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.649 [2024-11-26 19:05:38.933811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.649 BaseBdev2 00:18:47.649 [2024-11-26 19:05:38.934315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:47.649 [2024-11-26 19:05:38.934355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:47.649 [2024-11-26 19:05:38.934460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:47.649 [2024-11-26 19:05:38.934630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:47.649 [2024-11-26 19:05:38.934651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:18:47.649 [2024-11-26 19:05:38.934766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.649 [ 00:18:47.649 { 00:18:47.649 "name": "BaseBdev2", 00:18:47.649 "aliases": [ 00:18:47.649 
"47563568-4cfe-4a6f-b428-7024a625e41e" 00:18:47.649 ], 00:18:47.649 "product_name": "Malloc disk", 00:18:47.649 "block_size": 4096, 00:18:47.649 "num_blocks": 8192, 00:18:47.649 "uuid": "47563568-4cfe-4a6f-b428-7024a625e41e", 00:18:47.649 "md_size": 32, 00:18:47.649 "md_interleave": false, 00:18:47.649 "dif_type": 0, 00:18:47.649 "assigned_rate_limits": { 00:18:47.649 "rw_ios_per_sec": 0, 00:18:47.649 "rw_mbytes_per_sec": 0, 00:18:47.649 "r_mbytes_per_sec": 0, 00:18:47.649 "w_mbytes_per_sec": 0 00:18:47.649 }, 00:18:47.649 "claimed": true, 00:18:47.649 "claim_type": "exclusive_write", 00:18:47.649 "zoned": false, 00:18:47.649 "supported_io_types": { 00:18:47.649 "read": true, 00:18:47.649 "write": true, 00:18:47.649 "unmap": true, 00:18:47.649 "flush": true, 00:18:47.649 "reset": true, 00:18:47.649 "nvme_admin": false, 00:18:47.649 "nvme_io": false, 00:18:47.649 "nvme_io_md": false, 00:18:47.649 "write_zeroes": true, 00:18:47.649 "zcopy": true, 00:18:47.649 "get_zone_info": false, 00:18:47.649 "zone_management": false, 00:18:47.649 "zone_append": false, 00:18:47.649 "compare": false, 00:18:47.649 "compare_and_write": false, 00:18:47.649 "abort": true, 00:18:47.649 "seek_hole": false, 00:18:47.649 "seek_data": false, 00:18:47.649 "copy": true, 00:18:47.649 "nvme_iov_md": false 00:18:47.649 }, 00:18:47.649 "memory_domains": [ 00:18:47.649 { 00:18:47.649 "dma_device_id": "system", 00:18:47.649 "dma_device_type": 1 00:18:47.649 }, 00:18:47.649 { 00:18:47.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.649 "dma_device_type": 2 00:18:47.649 } 00:18:47.649 ], 00:18:47.649 "driver_specific": {} 00:18:47.649 } 00:18:47.649 ] 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.649 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.650 19:05:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.650 19:05:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.908 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.908 "name": "Existed_Raid", 00:18:47.908 "uuid": "b152a873-8f41-4cee-80a5-7d1eafdad8b7", 00:18:47.908 "strip_size_kb": 0, 00:18:47.908 "state": "online", 00:18:47.908 "raid_level": "raid1", 00:18:47.908 "superblock": true, 00:18:47.908 "num_base_bdevs": 2, 00:18:47.908 "num_base_bdevs_discovered": 2, 00:18:47.908 "num_base_bdevs_operational": 2, 00:18:47.908 "base_bdevs_list": [ 00:18:47.908 { 00:18:47.908 "name": "BaseBdev1", 00:18:47.908 "uuid": "82aaaab3-ee55-4dc5-ab56-e2424f342be3", 00:18:47.908 "is_configured": true, 00:18:47.908 "data_offset": 256, 00:18:47.908 "data_size": 7936 00:18:47.908 }, 00:18:47.908 { 00:18:47.908 "name": "BaseBdev2", 00:18:47.908 "uuid": "47563568-4cfe-4a6f-b428-7024a625e41e", 00:18:47.908 "is_configured": true, 00:18:47.908 "data_offset": 256, 00:18:47.908 "data_size": 7936 00:18:47.908 } 00:18:47.908 ] 00:18:47.908 }' 00:18:47.908 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.908 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:48.168 19:05:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.168 [2024-11-26 19:05:39.482555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.168 "name": "Existed_Raid", 00:18:48.168 "aliases": [ 00:18:48.168 "b152a873-8f41-4cee-80a5-7d1eafdad8b7" 00:18:48.168 ], 00:18:48.168 "product_name": "Raid Volume", 00:18:48.168 "block_size": 4096, 00:18:48.168 "num_blocks": 7936, 00:18:48.168 "uuid": "b152a873-8f41-4cee-80a5-7d1eafdad8b7", 00:18:48.168 "md_size": 32, 00:18:48.168 "md_interleave": false, 00:18:48.168 "dif_type": 0, 00:18:48.168 "assigned_rate_limits": { 00:18:48.168 "rw_ios_per_sec": 0, 00:18:48.168 "rw_mbytes_per_sec": 0, 00:18:48.168 "r_mbytes_per_sec": 0, 00:18:48.168 "w_mbytes_per_sec": 0 00:18:48.168 }, 00:18:48.168 "claimed": false, 00:18:48.168 "zoned": false, 00:18:48.168 "supported_io_types": { 00:18:48.168 "read": true, 00:18:48.168 "write": true, 00:18:48.168 "unmap": false, 00:18:48.168 "flush": false, 00:18:48.168 "reset": true, 00:18:48.168 "nvme_admin": false, 00:18:48.168 "nvme_io": false, 00:18:48.168 "nvme_io_md": false, 00:18:48.168 "write_zeroes": true, 00:18:48.168 "zcopy": false, 00:18:48.168 "get_zone_info": 
false, 00:18:48.168 "zone_management": false, 00:18:48.168 "zone_append": false, 00:18:48.168 "compare": false, 00:18:48.168 "compare_and_write": false, 00:18:48.168 "abort": false, 00:18:48.168 "seek_hole": false, 00:18:48.168 "seek_data": false, 00:18:48.168 "copy": false, 00:18:48.168 "nvme_iov_md": false 00:18:48.168 }, 00:18:48.168 "memory_domains": [ 00:18:48.168 { 00:18:48.168 "dma_device_id": "system", 00:18:48.168 "dma_device_type": 1 00:18:48.168 }, 00:18:48.168 { 00:18:48.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.168 "dma_device_type": 2 00:18:48.168 }, 00:18:48.168 { 00:18:48.168 "dma_device_id": "system", 00:18:48.168 "dma_device_type": 1 00:18:48.168 }, 00:18:48.168 { 00:18:48.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.168 "dma_device_type": 2 00:18:48.168 } 00:18:48.168 ], 00:18:48.168 "driver_specific": { 00:18:48.168 "raid": { 00:18:48.168 "uuid": "b152a873-8f41-4cee-80a5-7d1eafdad8b7", 00:18:48.168 "strip_size_kb": 0, 00:18:48.168 "state": "online", 00:18:48.168 "raid_level": "raid1", 00:18:48.168 "superblock": true, 00:18:48.168 "num_base_bdevs": 2, 00:18:48.168 "num_base_bdevs_discovered": 2, 00:18:48.168 "num_base_bdevs_operational": 2, 00:18:48.168 "base_bdevs_list": [ 00:18:48.168 { 00:18:48.168 "name": "BaseBdev1", 00:18:48.168 "uuid": "82aaaab3-ee55-4dc5-ab56-e2424f342be3", 00:18:48.168 "is_configured": true, 00:18:48.168 "data_offset": 256, 00:18:48.168 "data_size": 7936 00:18:48.168 }, 00:18:48.168 { 00:18:48.168 "name": "BaseBdev2", 00:18:48.168 "uuid": "47563568-4cfe-4a6f-b428-7024a625e41e", 00:18:48.168 "is_configured": true, 00:18:48.168 "data_offset": 256, 00:18:48.168 "data_size": 7936 00:18:48.168 } 00:18:48.168 ] 00:18:48.168 } 00:18:48.168 } 00:18:48.168 }' 00:18:48.168 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:48.427 19:05:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:48.427 BaseBdev2' 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:48.427 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.428 19:05:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.428 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.428 [2024-11-26 19:05:39.738296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.686 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.687 "name": "Existed_Raid", 
00:18:48.687 "uuid": "b152a873-8f41-4cee-80a5-7d1eafdad8b7", 00:18:48.687 "strip_size_kb": 0, 00:18:48.687 "state": "online", 00:18:48.687 "raid_level": "raid1", 00:18:48.687 "superblock": true, 00:18:48.687 "num_base_bdevs": 2, 00:18:48.687 "num_base_bdevs_discovered": 1, 00:18:48.687 "num_base_bdevs_operational": 1, 00:18:48.687 "base_bdevs_list": [ 00:18:48.687 { 00:18:48.687 "name": null, 00:18:48.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.687 "is_configured": false, 00:18:48.687 "data_offset": 0, 00:18:48.687 "data_size": 7936 00:18:48.687 }, 00:18:48.687 { 00:18:48.687 "name": "BaseBdev2", 00:18:48.687 "uuid": "47563568-4cfe-4a6f-b428-7024a625e41e", 00:18:48.687 "is_configured": true, 00:18:48.687 "data_offset": 256, 00:18:48.687 "data_size": 7936 00:18:48.687 } 00:18:48.687 ] 00:18:48.687 }' 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.687 19:05:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 [2024-11-26 19:05:40.441935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:49.296 [2024-11-26 19:05:40.442254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.296 [2024-11-26 19:05:40.538409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.296 [2024-11-26 19:05:40.538632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.296 [2024-11-26 19:05:40.538669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.296 19:05:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87693 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87693 ']' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87693 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87693 00:18:49.296 killing process with pid 87693 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87693' 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87693 00:18:49.296 [2024-11-26 19:05:40.628400] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:49.296 19:05:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87693 00:18:49.296 [2024-11-26 19:05:40.643446] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.673 19:05:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:50.673 00:18:50.673 real 0m5.632s 00:18:50.673 user 0m8.477s 00:18:50.673 sys 0m0.797s 00:18:50.673 19:05:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.673 ************************************ 00:18:50.673 END TEST raid_state_function_test_sb_md_separate 00:18:50.673 ************************************ 00:18:50.673 19:05:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.673 19:05:41 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:50.673 19:05:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:50.673 19:05:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.673 19:05:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.673 ************************************ 00:18:50.673 START TEST raid_superblock_test_md_separate 00:18:50.673 ************************************ 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87944 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87944 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87944 ']' 00:18:50.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.673 19:05:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.673 [2024-11-26 19:05:41.849224] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:18:50.673 [2024-11-26 19:05:41.849372] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87944 ] 00:18:50.673 [2024-11-26 19:05:42.027683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.932 [2024-11-26 19:05:42.183031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.191 [2024-11-26 19:05:42.432925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.191 [2024-11-26 19:05:42.433002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 malloc1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 [2024-11-26 19:05:42.914634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:51.758 [2024-11-26 19:05:42.914702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.758 [2024-11-26 19:05:42.914736] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:51.758 [2024-11-26 19:05:42.914751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.758 [2024-11-26 19:05:42.917557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.758 pt1 00:18:51.758 [2024-11-26 19:05:42.917731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 malloc2 00:18:51.758 19:05:42 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 [2024-11-26 19:05:42.967434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:51.758 [2024-11-26 19:05:42.967652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.758 [2024-11-26 19:05:42.967729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:51.758 [2024-11-26 19:05:42.967934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.758 [2024-11-26 19:05:42.970554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.758 [2024-11-26 19:05:42.970717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:51.758 pt2 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 
[2024-11-26 19:05:42.979584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:51.758 [2024-11-26 19:05:42.982036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:51.758 [2024-11-26 19:05:42.982276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:51.758 [2024-11-26 19:05:42.982298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:51.758 [2024-11-26 19:05:42.982394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:51.758 [2024-11-26 19:05:42.982565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:51.758 [2024-11-26 19:05:42.982585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:51.758 [2024-11-26 19:05:42.982712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.758 19:05:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.758 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.758 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.758 "name": "raid_bdev1", 00:18:51.758 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:51.758 "strip_size_kb": 0, 00:18:51.758 "state": "online", 00:18:51.758 "raid_level": "raid1", 00:18:51.758 "superblock": true, 00:18:51.758 "num_base_bdevs": 2, 00:18:51.758 "num_base_bdevs_discovered": 2, 00:18:51.758 "num_base_bdevs_operational": 2, 00:18:51.758 "base_bdevs_list": [ 00:18:51.758 { 00:18:51.758 "name": "pt1", 00:18:51.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:51.758 "is_configured": true, 00:18:51.758 "data_offset": 256, 00:18:51.758 "data_size": 7936 00:18:51.758 }, 00:18:51.758 { 00:18:51.758 "name": "pt2", 00:18:51.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.758 "is_configured": true, 00:18:51.758 "data_offset": 256, 00:18:51.758 "data_size": 7936 00:18:51.758 } 00:18:51.758 ] 00:18:51.758 }' 00:18:51.758 19:05:43 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.758 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.326 [2024-11-26 19:05:43.496121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:52.326 "name": "raid_bdev1", 00:18:52.326 "aliases": [ 00:18:52.326 "306a1c32-ae59-4aeb-a40c-70a96300a9ba" 00:18:52.326 ], 00:18:52.326 "product_name": "Raid Volume", 00:18:52.326 "block_size": 4096, 00:18:52.326 "num_blocks": 7936, 00:18:52.326 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 
00:18:52.326 "md_size": 32, 00:18:52.326 "md_interleave": false, 00:18:52.326 "dif_type": 0, 00:18:52.326 "assigned_rate_limits": { 00:18:52.326 "rw_ios_per_sec": 0, 00:18:52.326 "rw_mbytes_per_sec": 0, 00:18:52.326 "r_mbytes_per_sec": 0, 00:18:52.326 "w_mbytes_per_sec": 0 00:18:52.326 }, 00:18:52.326 "claimed": false, 00:18:52.326 "zoned": false, 00:18:52.326 "supported_io_types": { 00:18:52.326 "read": true, 00:18:52.326 "write": true, 00:18:52.326 "unmap": false, 00:18:52.326 "flush": false, 00:18:52.326 "reset": true, 00:18:52.326 "nvme_admin": false, 00:18:52.326 "nvme_io": false, 00:18:52.326 "nvme_io_md": false, 00:18:52.326 "write_zeroes": true, 00:18:52.326 "zcopy": false, 00:18:52.326 "get_zone_info": false, 00:18:52.326 "zone_management": false, 00:18:52.326 "zone_append": false, 00:18:52.326 "compare": false, 00:18:52.326 "compare_and_write": false, 00:18:52.326 "abort": false, 00:18:52.326 "seek_hole": false, 00:18:52.326 "seek_data": false, 00:18:52.326 "copy": false, 00:18:52.326 "nvme_iov_md": false 00:18:52.326 }, 00:18:52.326 "memory_domains": [ 00:18:52.326 { 00:18:52.326 "dma_device_id": "system", 00:18:52.326 "dma_device_type": 1 00:18:52.326 }, 00:18:52.326 { 00:18:52.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.326 "dma_device_type": 2 00:18:52.326 }, 00:18:52.326 { 00:18:52.326 "dma_device_id": "system", 00:18:52.326 "dma_device_type": 1 00:18:52.326 }, 00:18:52.326 { 00:18:52.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.326 "dma_device_type": 2 00:18:52.326 } 00:18:52.326 ], 00:18:52.326 "driver_specific": { 00:18:52.326 "raid": { 00:18:52.326 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:52.326 "strip_size_kb": 0, 00:18:52.326 "state": "online", 00:18:52.326 "raid_level": "raid1", 00:18:52.326 "superblock": true, 00:18:52.326 "num_base_bdevs": 2, 00:18:52.326 "num_base_bdevs_discovered": 2, 00:18:52.326 "num_base_bdevs_operational": 2, 00:18:52.326 "base_bdevs_list": [ 00:18:52.326 { 00:18:52.326 "name": "pt1", 
00:18:52.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:52.326 "is_configured": true, 00:18:52.326 "data_offset": 256, 00:18:52.326 "data_size": 7936 00:18:52.326 }, 00:18:52.326 { 00:18:52.326 "name": "pt2", 00:18:52.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.326 "is_configured": true, 00:18:52.326 "data_offset": 256, 00:18:52.326 "data_size": 7936 00:18:52.326 } 00:18:52.326 ] 00:18:52.326 } 00:18:52.326 } 00:18:52.326 }' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:52.326 pt2' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.326 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:52.586 19:05:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 [2024-11-26 19:05:43.760084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=306a1c32-ae59-4aeb-a40c-70a96300a9ba 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 306a1c32-ae59-4aeb-a40c-70a96300a9ba ']' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 [2024-11-26 19:05:43.807759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.586 [2024-11-26 19:05:43.807922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.586 [2024-11-26 19:05:43.808159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.586 [2024-11-26 19:05:43.808382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.586 [2024-11-26 19:05:43.808513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.586 19:05:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.586 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.845 [2024-11-26 19:05:43.951870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:52.845 [2024-11-26 19:05:43.954613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:52.845 [2024-11-26 19:05:43.954850] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:52.845 [2024-11-26 19:05:43.954950] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:52.845 [2024-11-26 19:05:43.954979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.845 [2024-11-26 19:05:43.954995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:52.845 request: 00:18:52.845 { 00:18:52.845 "name": "raid_bdev1", 00:18:52.845 "raid_level": "raid1", 00:18:52.845 "base_bdevs": [ 00:18:52.845 "malloc1", 00:18:52.845 "malloc2" 00:18:52.846 ], 00:18:52.846 "superblock": false, 00:18:52.846 "method": "bdev_raid_create", 00:18:52.846 "req_id": 1 00:18:52.846 } 00:18:52.846 Got JSON-RPC error response 00:18:52.846 response: 00:18:52.846 { 00:18:52.846 "code": -17, 00:18:52.846 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:52.846 } 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.846 19:05:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:52.846 19:05:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.846 [2024-11-26 19:05:44.015931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:52.846 [2024-11-26 19:05:44.016136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.846 [2024-11-26 19:05:44.016206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:52.846 [2024-11-26 19:05:44.016338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.846 [2024-11-26 19:05:44.019190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.846 [2024-11-26 19:05:44.019238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:52.846 [2024-11-26 19:05:44.019317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:52.846 [2024-11-26 19:05:44.019391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:52.846 pt1 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:52.846 19:05:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.846 "name": "raid_bdev1", 00:18:52.846 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:52.846 "strip_size_kb": 0, 00:18:52.846 "state": "configuring", 00:18:52.846 "raid_level": "raid1", 00:18:52.846 
"superblock": true, 00:18:52.846 "num_base_bdevs": 2, 00:18:52.846 "num_base_bdevs_discovered": 1, 00:18:52.846 "num_base_bdevs_operational": 2, 00:18:52.846 "base_bdevs_list": [ 00:18:52.846 { 00:18:52.846 "name": "pt1", 00:18:52.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:52.846 "is_configured": true, 00:18:52.846 "data_offset": 256, 00:18:52.846 "data_size": 7936 00:18:52.846 }, 00:18:52.846 { 00:18:52.846 "name": null, 00:18:52.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.846 "is_configured": false, 00:18:52.846 "data_offset": 256, 00:18:52.846 "data_size": 7936 00:18:52.846 } 00:18:52.846 ] 00:18:52.846 }' 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.846 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.414 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:53.414 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:53.414 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.414 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:53.414 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.414 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.414 [2024-11-26 19:05:44.556050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.414 [2024-11-26 19:05:44.556145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.414 [2024-11-26 19:05:44.556178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:53.414 
[2024-11-26 19:05:44.556197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.414 [2024-11-26 19:05:44.556533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.414 [2024-11-26 19:05:44.556562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:53.415 [2024-11-26 19:05:44.556627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:53.415 [2024-11-26 19:05:44.556660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.415 [2024-11-26 19:05:44.556793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:53.415 [2024-11-26 19:05:44.556811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:53.415 [2024-11-26 19:05:44.556899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:53.415 [2024-11-26 19:05:44.557096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:53.415 [2024-11-26 19:05:44.557111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:53.415 [2024-11-26 19:05:44.557234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.415 pt2 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.415 "name": "raid_bdev1", 00:18:53.415 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:53.415 "strip_size_kb": 0, 00:18:53.415 "state": "online", 00:18:53.415 "raid_level": "raid1", 00:18:53.415 "superblock": true, 00:18:53.415 "num_base_bdevs": 2, 00:18:53.415 "num_base_bdevs_discovered": 2, 00:18:53.415 
"num_base_bdevs_operational": 2, 00:18:53.415 "base_bdevs_list": [ 00:18:53.415 { 00:18:53.415 "name": "pt1", 00:18:53.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.415 "is_configured": true, 00:18:53.415 "data_offset": 256, 00:18:53.415 "data_size": 7936 00:18:53.415 }, 00:18:53.415 { 00:18:53.415 "name": "pt2", 00:18:53.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.415 "is_configured": true, 00:18:53.415 "data_offset": 256, 00:18:53.415 "data_size": 7936 00:18:53.415 } 00:18:53.415 ] 00:18:53.415 }' 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.415 19:05:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:53.984 [2024-11-26 19:05:45.088535] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.984 "name": "raid_bdev1", 00:18:53.984 "aliases": [ 00:18:53.984 "306a1c32-ae59-4aeb-a40c-70a96300a9ba" 00:18:53.984 ], 00:18:53.984 "product_name": "Raid Volume", 00:18:53.984 "block_size": 4096, 00:18:53.984 "num_blocks": 7936, 00:18:53.984 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:53.984 "md_size": 32, 00:18:53.984 "md_interleave": false, 00:18:53.984 "dif_type": 0, 00:18:53.984 "assigned_rate_limits": { 00:18:53.984 "rw_ios_per_sec": 0, 00:18:53.984 "rw_mbytes_per_sec": 0, 00:18:53.984 "r_mbytes_per_sec": 0, 00:18:53.984 "w_mbytes_per_sec": 0 00:18:53.984 }, 00:18:53.984 "claimed": false, 00:18:53.984 "zoned": false, 00:18:53.984 "supported_io_types": { 00:18:53.984 "read": true, 00:18:53.984 "write": true, 00:18:53.984 "unmap": false, 00:18:53.984 "flush": false, 00:18:53.984 "reset": true, 00:18:53.984 "nvme_admin": false, 00:18:53.984 "nvme_io": false, 00:18:53.984 "nvme_io_md": false, 00:18:53.984 "write_zeroes": true, 00:18:53.984 "zcopy": false, 00:18:53.984 "get_zone_info": false, 00:18:53.984 "zone_management": false, 00:18:53.984 "zone_append": false, 00:18:53.984 "compare": false, 00:18:53.984 "compare_and_write": false, 00:18:53.984 "abort": false, 00:18:53.984 "seek_hole": false, 00:18:53.984 "seek_data": false, 00:18:53.984 "copy": false, 00:18:53.984 "nvme_iov_md": false 00:18:53.984 }, 00:18:53.984 "memory_domains": [ 00:18:53.984 { 00:18:53.984 "dma_device_id": "system", 00:18:53.984 "dma_device_type": 1 00:18:53.984 }, 00:18:53.984 { 00:18:53.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.984 "dma_device_type": 2 00:18:53.984 }, 00:18:53.984 { 00:18:53.984 "dma_device_id": "system", 00:18:53.984 "dma_device_type": 
1 00:18:53.984 }, 00:18:53.984 { 00:18:53.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.984 "dma_device_type": 2 00:18:53.984 } 00:18:53.984 ], 00:18:53.984 "driver_specific": { 00:18:53.984 "raid": { 00:18:53.984 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:53.984 "strip_size_kb": 0, 00:18:53.984 "state": "online", 00:18:53.984 "raid_level": "raid1", 00:18:53.984 "superblock": true, 00:18:53.984 "num_base_bdevs": 2, 00:18:53.984 "num_base_bdevs_discovered": 2, 00:18:53.984 "num_base_bdevs_operational": 2, 00:18:53.984 "base_bdevs_list": [ 00:18:53.984 { 00:18:53.984 "name": "pt1", 00:18:53.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.984 "is_configured": true, 00:18:53.984 "data_offset": 256, 00:18:53.984 "data_size": 7936 00:18:53.984 }, 00:18:53.984 { 00:18:53.984 "name": "pt2", 00:18:53.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.984 "is_configured": true, 00:18:53.984 "data_offset": 256, 00:18:53.984 "data_size": 7936 00:18:53.984 } 00:18:53.984 ] 00:18:53.984 } 00:18:53.984 } 00:18:53.984 }' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:53.984 pt2' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.984 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.985 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:53.985 [2024-11-26 19:05:45.344561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 306a1c32-ae59-4aeb-a40c-70a96300a9ba '!=' 306a1c32-ae59-4aeb-a40c-70a96300a9ba ']' 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.244 [2024-11-26 19:05:45.396274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.244 19:05:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.244 "name": "raid_bdev1", 00:18:54.244 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:54.244 "strip_size_kb": 0, 00:18:54.244 "state": "online", 00:18:54.244 "raid_level": "raid1", 00:18:54.244 "superblock": true, 00:18:54.244 "num_base_bdevs": 2, 00:18:54.244 "num_base_bdevs_discovered": 1, 00:18:54.244 "num_base_bdevs_operational": 1, 00:18:54.244 "base_bdevs_list": [ 00:18:54.244 { 00:18:54.244 "name": null, 00:18:54.244 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:54.244 "is_configured": false, 00:18:54.244 "data_offset": 0, 00:18:54.244 "data_size": 7936 00:18:54.244 }, 00:18:54.244 { 00:18:54.244 "name": "pt2", 00:18:54.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.244 "is_configured": true, 00:18:54.244 "data_offset": 256, 00:18:54.244 "data_size": 7936 00:18:54.244 } 00:18:54.244 ] 00:18:54.244 }' 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.244 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 [2024-11-26 19:05:45.896433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.813 [2024-11-26 19:05:45.896469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.813 [2024-11-26 19:05:45.896566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.813 [2024-11-26 19:05:45.896659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.813 [2024-11-26 19:05:45.896676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 [2024-11-26 19:05:45.976428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:54.813 [2024-11-26 19:05:45.976657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.813 [2024-11-26 19:05:45.976725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:54.813 [2024-11-26 19:05:45.976852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.813 [2024-11-26 19:05:45.979692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.813 [2024-11-26 19:05:45.979869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:54.813 [2024-11-26 19:05:45.980069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:54.813 [2024-11-26 19:05:45.980174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:54.813 [2024-11-26 19:05:45.980361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:54.813 [2024-11-26 19:05:45.980535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:54.813 [2024-11-26 19:05:45.980665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:54.813 [2024-11-26 19:05:45.981009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:54.813 [2024-11-26 19:05:45.981124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:54.813 [2024-11-26 19:05:45.981450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:54.813 pt2 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.813 19:05:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.813 
19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.813 "name": "raid_bdev1", 00:18:54.813 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:54.813 "strip_size_kb": 0, 00:18:54.813 "state": "online", 00:18:54.813 "raid_level": "raid1", 00:18:54.813 "superblock": true, 00:18:54.813 "num_base_bdevs": 2, 00:18:54.813 "num_base_bdevs_discovered": 1, 00:18:54.813 "num_base_bdevs_operational": 1, 00:18:54.813 "base_bdevs_list": [ 00:18:54.813 { 00:18:54.813 "name": null, 00:18:54.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.813 "is_configured": false, 00:18:54.813 "data_offset": 256, 00:18:54.813 "data_size": 7936 00:18:54.813 }, 00:18:54.813 { 00:18:54.813 "name": "pt2", 00:18:54.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.813 "is_configured": true, 00:18:54.813 "data_offset": 256, 00:18:54.813 "data_size": 7936 00:18:54.813 } 00:18:54.813 ] 00:18:54.813 }' 00:18:54.813 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.813 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.381 [2024-11-26 19:05:46.480643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.381 [2024-11-26 19:05:46.480681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.381 [2024-11-26 19:05:46.480784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.381 [2024-11-26 19:05:46.480857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.381 [2024-11-26 19:05:46.480871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.381 [2024-11-26 19:05:46.544681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.381 [2024-11-26 19:05:46.544903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.381 [2024-11-26 19:05:46.544979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:18:55.381 [2024-11-26 19:05:46.545219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.381 [2024-11-26 19:05:46.547985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.381 [2024-11-26 19:05:46.548145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.381 [2024-11-26 19:05:46.548237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:55.381 [2024-11-26 19:05:46.548308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.381 [2024-11-26 19:05:46.548486] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:55.381 [2024-11-26 19:05:46.548503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.381 [2024-11-26 19:05:46.548525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:55.381 [2024-11-26 19:05:46.548606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.381 [2024-11-26 19:05:46.548718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:55.381 [2024-11-26 19:05:46.548733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:55.381 [2024-11-26 19:05:46.548819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:55.381 [2024-11-26 19:05:46.549129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:55.381 [2024-11-26 19:05:46.549187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:55.381 [2024-11-26 19:05:46.549567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.381 pt1 00:18:55.381 
19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.381 "name": "raid_bdev1", 00:18:55.381 "uuid": "306a1c32-ae59-4aeb-a40c-70a96300a9ba", 00:18:55.381 "strip_size_kb": 0, 00:18:55.381 "state": "online", 00:18:55.381 "raid_level": "raid1", 00:18:55.381 "superblock": true, 00:18:55.381 "num_base_bdevs": 2, 00:18:55.381 "num_base_bdevs_discovered": 1, 00:18:55.381 "num_base_bdevs_operational": 1, 00:18:55.381 "base_bdevs_list": [ 00:18:55.381 { 00:18:55.381 "name": null, 00:18:55.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.381 "is_configured": false, 00:18:55.381 "data_offset": 256, 00:18:55.381 "data_size": 7936 00:18:55.381 }, 00:18:55.381 { 00:18:55.381 "name": "pt2", 00:18:55.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.381 "is_configured": true, 00:18:55.381 "data_offset": 256, 00:18:55.381 "data_size": 7936 00:18:55.381 } 00:18:55.381 ] 00:18:55.381 }' 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.381 19:05:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:55.949 19:05:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.949 [2024-11-26 19:05:47.109325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 306a1c32-ae59-4aeb-a40c-70a96300a9ba '!=' 306a1c32-ae59-4aeb-a40c-70a96300a9ba ']' 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87944 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87944 ']' 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87944 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:55.949 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.950 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87944 00:18:55.950 killing process with pid 87944 00:18:55.950 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.950 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.950 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87944' 00:18:55.950 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87944 00:18:55.950 [2024-11-26 19:05:47.181332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.950 [2024-11-26 19:05:47.181430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.950 [2024-11-26 19:05:47.181495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.950 19:05:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87944 00:18:55.950 [2024-11-26 19:05:47.181519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:56.280 [2024-11-26 19:05:47.377779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.230 19:05:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:57.230 00:18:57.230 real 0m6.682s 00:18:57.230 user 0m10.570s 00:18:57.230 sys 0m0.972s 00:18:57.230 ************************************ 00:18:57.230 END TEST raid_superblock_test_md_separate 00:18:57.230 19:05:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.230 19:05:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.230 ************************************ 00:18:57.230 19:05:48 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:57.230 19:05:48 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:57.230 19:05:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:57.230 19:05:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.230 19:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.230 
************************************ 00:18:57.230 START TEST raid_rebuild_test_sb_md_separate 00:18:57.230 ************************************ 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 
-- # local base_bdevs 00:18:57.230 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88274 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88274 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88274 ']' 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.231 19:05:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.489 [2024-11-26 19:05:48.616694] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:18:57.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:57.489 Zero copy mechanism will not be used. 00:18:57.489 [2024-11-26 19:05:48.617125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88274 ] 00:18:57.489 [2024-11-26 19:05:48.806595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.748 [2024-11-26 19:05:48.965255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.008 [2024-11-26 19:05:49.186836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.008 [2024-11-26 19:05:49.186890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 BaseBdev1_malloc 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 [2024-11-26 19:05:49.730088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:58.575 [2024-11-26 19:05:49.730310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.575 [2024-11-26 19:05:49.730468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:58.575 [2024-11-26 19:05:49.730502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.575 [2024-11-26 19:05:49.733180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.575 [2024-11-26 19:05:49.733247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:58.575 BaseBdev1 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:58.575 19:05:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 BaseBdev2_malloc 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 [2024-11-26 19:05:49.777727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:58.575 [2024-11-26 19:05:49.777789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.575 [2024-11-26 19:05:49.777817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:58.575 [2024-11-26 19:05:49.777851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.575 [2024-11-26 19:05:49.780427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.575 [2024-11-26 19:05:49.780489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:58.575 BaseBdev2 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:18:58.575 spare_malloc 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 spare_delay 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 [2024-11-26 19:05:49.850770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:58.575 [2024-11-26 19:05:49.851011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.575 [2024-11-26 19:05:49.851053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:58.575 [2024-11-26 19:05:49.851074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.575 [2024-11-26 19:05:49.853663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.575 [2024-11-26 19:05:49.853728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:58.575 spare 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 [2024-11-26 19:05:49.858825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.575 [2024-11-26 19:05:49.861464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.575 [2024-11-26 19:05:49.861878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:58.575 [2024-11-26 19:05:49.862063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.575 [2024-11-26 19:05:49.862281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:58.575 [2024-11-26 19:05:49.862575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:58.575 [2024-11-26 19:05:49.862689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:58.575 [2024-11-26 19:05:49.863016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.575 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.575 "name": "raid_bdev1", 00:18:58.575 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:18:58.575 "strip_size_kb": 0, 00:18:58.575 "state": "online", 00:18:58.576 "raid_level": "raid1", 00:18:58.576 "superblock": true, 00:18:58.576 "num_base_bdevs": 2, 00:18:58.576 "num_base_bdevs_discovered": 2, 00:18:58.576 "num_base_bdevs_operational": 2, 00:18:58.576 "base_bdevs_list": [ 00:18:58.576 { 00:18:58.576 "name": "BaseBdev1", 00:18:58.576 "uuid": "c8c4c745-91c2-5a57-86ae-ee1ee0ec96be", 00:18:58.576 "is_configured": true, 00:18:58.576 "data_offset": 256, 
00:18:58.576 "data_size": 7936 00:18:58.576 }, 00:18:58.576 { 00:18:58.576 "name": "BaseBdev2", 00:18:58.576 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:18:58.576 "is_configured": true, 00:18:58.576 "data_offset": 256, 00:18:58.576 "data_size": 7936 00:18:58.576 } 00:18:58.576 ] 00:18:58.576 }' 00:18:58.576 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.576 19:05:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.143 [2024-11-26 19:05:50.379567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.143 19:05:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.143 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:59.711 [2024-11-26 19:05:50.779407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:59.711 /dev/nbd0 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:59.711 19:05:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.711 1+0 records in 00:18:59.711 1+0 records out 00:18:59.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551906 s, 7.4 MB/s 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:59.711 19:05:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:00.647 7936+0 records in 00:19:00.647 7936+0 records out 00:19:00.647 32505856 bytes (33 MB, 31 MiB) copied, 0.934275 s, 34.8 MB/s 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.647 19:05:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.907 [2024-11-26 19:05:52.081954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.907 [2024-11-26 19:05:52.090059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.907 "name": "raid_bdev1", 00:19:00.907 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:00.907 "strip_size_kb": 0, 00:19:00.907 "state": "online", 00:19:00.907 "raid_level": "raid1", 00:19:00.907 "superblock": true, 00:19:00.907 "num_base_bdevs": 2, 00:19:00.907 "num_base_bdevs_discovered": 1, 00:19:00.907 "num_base_bdevs_operational": 1, 00:19:00.907 "base_bdevs_list": [ 00:19:00.907 { 00:19:00.907 "name": null, 00:19:00.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.907 "is_configured": false, 00:19:00.907 "data_offset": 0, 00:19:00.907 "data_size": 7936 00:19:00.907 }, 00:19:00.907 { 00:19:00.907 "name": "BaseBdev2", 00:19:00.907 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:00.907 "is_configured": 
true, 00:19:00.907 "data_offset": 256, 00:19:00.907 "data_size": 7936 00:19:00.907 } 00:19:00.907 ] 00:19:00.907 }' 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.907 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.476 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.476 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.476 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.476 [2024-11-26 19:05:52.638262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.476 [2024-11-26 19:05:52.652273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:01.476 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.476 19:05:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:01.476 [2024-11-26 19:05:52.654773] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.461 "name": "raid_bdev1", 00:19:02.461 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:02.461 "strip_size_kb": 0, 00:19:02.461 "state": "online", 00:19:02.461 "raid_level": "raid1", 00:19:02.461 "superblock": true, 00:19:02.461 "num_base_bdevs": 2, 00:19:02.461 "num_base_bdevs_discovered": 2, 00:19:02.461 "num_base_bdevs_operational": 2, 00:19:02.461 "process": { 00:19:02.461 "type": "rebuild", 00:19:02.461 "target": "spare", 00:19:02.461 "progress": { 00:19:02.461 "blocks": 2560, 00:19:02.461 "percent": 32 00:19:02.461 } 00:19:02.461 }, 00:19:02.461 "base_bdevs_list": [ 00:19:02.461 { 00:19:02.461 "name": "spare", 00:19:02.461 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:02.461 "is_configured": true, 00:19:02.461 "data_offset": 256, 00:19:02.461 "data_size": 7936 00:19:02.461 }, 00:19:02.461 { 00:19:02.461 "name": "BaseBdev2", 00:19:02.461 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:02.461 "is_configured": true, 00:19:02.461 "data_offset": 256, 00:19:02.461 "data_size": 7936 00:19:02.461 } 00:19:02.461 ] 00:19:02.461 }' 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.461 
19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.461 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.720 [2024-11-26 19:05:53.828416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.720 [2024-11-26 19:05:53.863747] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:02.720 [2024-11-26 19:05:53.864062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.720 [2024-11-26 19:05:53.864092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.720 [2024-11-26 19:05:53.864113] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.720 19:05:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.720 "name": "raid_bdev1", 00:19:02.720 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:02.720 "strip_size_kb": 0, 00:19:02.720 "state": "online", 00:19:02.720 "raid_level": "raid1", 00:19:02.720 "superblock": true, 00:19:02.720 "num_base_bdevs": 2, 00:19:02.720 "num_base_bdevs_discovered": 1, 00:19:02.720 "num_base_bdevs_operational": 1, 00:19:02.720 "base_bdevs_list": [ 00:19:02.720 { 00:19:02.720 "name": null, 00:19:02.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.720 "is_configured": false, 00:19:02.720 "data_offset": 0, 00:19:02.720 "data_size": 7936 00:19:02.720 }, 00:19:02.720 { 00:19:02.720 "name": "BaseBdev2", 00:19:02.720 "uuid": 
"337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:02.720 "is_configured": true, 00:19:02.720 "data_offset": 256, 00:19:02.720 "data_size": 7936 00:19:02.720 } 00:19:02.720 ] 00:19:02.720 }' 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.720 19:05:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.316 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.316 "name": "raid_bdev1", 00:19:03.316 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:03.316 "strip_size_kb": 0, 00:19:03.316 "state": "online", 00:19:03.316 "raid_level": "raid1", 00:19:03.316 "superblock": true, 00:19:03.316 
"num_base_bdevs": 2, 00:19:03.316 "num_base_bdevs_discovered": 1, 00:19:03.316 "num_base_bdevs_operational": 1, 00:19:03.316 "base_bdevs_list": [ 00:19:03.316 { 00:19:03.316 "name": null, 00:19:03.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.316 "is_configured": false, 00:19:03.316 "data_offset": 0, 00:19:03.316 "data_size": 7936 00:19:03.316 }, 00:19:03.316 { 00:19:03.316 "name": "BaseBdev2", 00:19:03.316 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:03.316 "is_configured": true, 00:19:03.316 "data_offset": 256, 00:19:03.316 "data_size": 7936 00:19:03.316 } 00:19:03.316 ] 00:19:03.317 }' 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.317 [2024-11-26 19:05:54.538463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.317 [2024-11-26 19:05:54.551938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.317 19:05:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:03.317 [2024-11-26 19:05:54.554538] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.253 "name": "raid_bdev1", 00:19:04.253 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:04.253 "strip_size_kb": 0, 00:19:04.253 "state": "online", 00:19:04.253 "raid_level": "raid1", 00:19:04.253 "superblock": true, 00:19:04.253 "num_base_bdevs": 2, 00:19:04.253 "num_base_bdevs_discovered": 2, 00:19:04.253 "num_base_bdevs_operational": 2, 00:19:04.253 "process": { 00:19:04.253 "type": "rebuild", 00:19:04.253 "target": "spare", 00:19:04.253 "progress": { 00:19:04.253 "blocks": 2560, 00:19:04.253 "percent": 32 00:19:04.253 } 00:19:04.253 
}, 00:19:04.253 "base_bdevs_list": [ 00:19:04.253 { 00:19:04.253 "name": "spare", 00:19:04.253 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:04.253 "is_configured": true, 00:19:04.253 "data_offset": 256, 00:19:04.253 "data_size": 7936 00:19:04.253 }, 00:19:04.253 { 00:19:04.253 "name": "BaseBdev2", 00:19:04.253 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:04.253 "is_configured": true, 00:19:04.253 "data_offset": 256, 00:19:04.253 "data_size": 7936 00:19:04.253 } 00:19:04.253 ] 00:19:04.253 }' 00:19:04.253 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:04.512 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=774 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:04.512 19:05:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.512 "name": "raid_bdev1", 00:19:04.512 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:04.512 "strip_size_kb": 0, 00:19:04.512 "state": "online", 00:19:04.512 "raid_level": "raid1", 00:19:04.512 "superblock": true, 00:19:04.512 "num_base_bdevs": 2, 00:19:04.512 "num_base_bdevs_discovered": 2, 00:19:04.512 "num_base_bdevs_operational": 2, 00:19:04.512 "process": { 00:19:04.512 "type": "rebuild", 00:19:04.512 "target": "spare", 00:19:04.512 "progress": { 00:19:04.512 "blocks": 2816, 00:19:04.512 "percent": 35 00:19:04.512 } 00:19:04.512 }, 00:19:04.512 "base_bdevs_list": [ 00:19:04.512 { 00:19:04.512 "name": "spare", 00:19:04.512 "uuid": 
"9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:04.512 "is_configured": true, 00:19:04.512 "data_offset": 256, 00:19:04.512 "data_size": 7936 00:19:04.512 }, 00:19:04.512 { 00:19:04.512 "name": "BaseBdev2", 00:19:04.512 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:04.512 "is_configured": true, 00:19:04.512 "data_offset": 256, 00:19:04.512 "data_size": 7936 00:19:04.512 } 00:19:04.512 ] 00:19:04.512 }' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.512 19:05:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.889 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.889 "name": "raid_bdev1", 00:19:05.889 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:05.889 "strip_size_kb": 0, 00:19:05.889 "state": "online", 00:19:05.889 "raid_level": "raid1", 00:19:05.889 "superblock": true, 00:19:05.889 "num_base_bdevs": 2, 00:19:05.889 "num_base_bdevs_discovered": 2, 00:19:05.889 "num_base_bdevs_operational": 2, 00:19:05.889 "process": { 00:19:05.889 "type": "rebuild", 00:19:05.889 "target": "spare", 00:19:05.889 "progress": { 00:19:05.889 "blocks": 5888, 00:19:05.889 "percent": 74 00:19:05.889 } 00:19:05.889 }, 00:19:05.889 "base_bdevs_list": [ 00:19:05.889 { 00:19:05.889 "name": "spare", 00:19:05.889 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:05.889 "is_configured": true, 00:19:05.889 "data_offset": 256, 00:19:05.889 "data_size": 7936 00:19:05.889 }, 00:19:05.890 { 00:19:05.890 "name": "BaseBdev2", 00:19:05.890 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:05.890 "is_configured": true, 00:19:05.890 "data_offset": 256, 00:19:05.890 "data_size": 7936 00:19:05.890 } 00:19:05.890 ] 00:19:05.890 }' 00:19:05.890 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.890 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.890 19:05:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.890 19:05:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.890 19:05:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.457 [2024-11-26 19:05:57.678877] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:06.457 [2024-11-26 19:05:57.678997] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:06.457 [2024-11-26 19:05:57.679167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.716 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.975 19:05:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.975 "name": "raid_bdev1", 00:19:06.975 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:06.975 "strip_size_kb": 0, 00:19:06.975 "state": "online", 00:19:06.975 "raid_level": "raid1", 00:19:06.975 "superblock": true, 00:19:06.975 "num_base_bdevs": 2, 00:19:06.975 "num_base_bdevs_discovered": 2, 00:19:06.975 "num_base_bdevs_operational": 2, 00:19:06.975 "base_bdevs_list": [ 00:19:06.975 { 00:19:06.975 "name": "spare", 00:19:06.975 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:06.975 "is_configured": true, 00:19:06.975 "data_offset": 256, 00:19:06.975 "data_size": 7936 00:19:06.975 }, 00:19:06.975 { 00:19:06.975 "name": "BaseBdev2", 00:19:06.975 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:06.975 "is_configured": true, 00:19:06.975 "data_offset": 256, 00:19:06.975 "data_size": 7936 00:19:06.975 } 00:19:06.975 ] 00:19:06.975 }' 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.975 19:05:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.975 "name": "raid_bdev1", 00:19:06.975 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:06.975 "strip_size_kb": 0, 00:19:06.975 "state": "online", 00:19:06.975 "raid_level": "raid1", 00:19:06.975 "superblock": true, 00:19:06.975 "num_base_bdevs": 2, 00:19:06.975 "num_base_bdevs_discovered": 2, 00:19:06.975 "num_base_bdevs_operational": 2, 00:19:06.975 "base_bdevs_list": [ 00:19:06.975 { 00:19:06.975 "name": "spare", 00:19:06.975 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:06.975 "is_configured": true, 00:19:06.975 "data_offset": 256, 00:19:06.975 "data_size": 7936 00:19:06.975 }, 00:19:06.975 { 00:19:06.975 "name": "BaseBdev2", 00:19:06.975 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:06.975 "is_configured": true, 00:19:06.975 "data_offset": 256, 00:19:06.975 "data_size": 7936 00:19:06.975 } 00:19:06.975 ] 00:19:06.975 }' 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.975 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.234 "name": "raid_bdev1", 00:19:07.234 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:07.234 "strip_size_kb": 0, 00:19:07.234 "state": "online", 00:19:07.234 "raid_level": "raid1", 00:19:07.234 "superblock": true, 00:19:07.234 "num_base_bdevs": 2, 00:19:07.234 "num_base_bdevs_discovered": 2, 00:19:07.234 "num_base_bdevs_operational": 2, 00:19:07.234 "base_bdevs_list": [ 00:19:07.234 { 00:19:07.234 "name": "spare", 00:19:07.234 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:07.234 "is_configured": true, 00:19:07.234 "data_offset": 256, 00:19:07.234 "data_size": 7936 00:19:07.234 }, 00:19:07.234 { 00:19:07.234 "name": "BaseBdev2", 00:19:07.234 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:07.234 "is_configured": true, 00:19:07.234 "data_offset": 256, 00:19:07.234 "data_size": 7936 00:19:07.234 } 00:19:07.234 ] 00:19:07.234 }' 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.234 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.802 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.803 [2024-11-26 19:05:58.906288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.803 [2024-11-26 19:05:58.907587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.803 [2024-11-26 19:05:58.907726] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.803 [2024-11-26 19:05:58.907830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.803 [2024-11-26 19:05:58.907868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:07.803 19:05:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:08.061 /dev/nbd0 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.061 1+0 records in 00:19:08.061 1+0 records out 00:19:08.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355607 s, 11.5 MB/s 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.061 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:08.319 /dev/nbd1 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.319 1+0 records in 00:19:08.319 1+0 records out 00:19:08.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053558 s, 7.6 MB/s 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.319 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:08.578 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:08.578 19:05:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:08.578 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.578 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:08.578 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:08.578 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.578 19:05:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.836 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:09.098 19:06:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.098 [2024-11-26 19:06:00.395987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:09.098 [2024-11-26 19:06:00.396199] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.098 [2024-11-26 19:06:00.396247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:09.098 [2024-11-26 19:06:00.396264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.098 [2024-11-26 19:06:00.399023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.098 [2024-11-26 19:06:00.399068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:09.098 [2024-11-26 19:06:00.399162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:09.098 [2024-11-26 19:06:00.399230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.098 [2024-11-26 19:06:00.399411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.098 spare 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.098 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.375 [2024-11-26 19:06:00.499725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:09.375 [2024-11-26 19:06:00.500032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:09.375 [2024-11-26 19:06:00.500272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:09.375 [2024-11-26 19:06:00.500627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:09.375 [2024-11-26 19:06:00.500765] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:09.375 [2024-11-26 19:06:00.501016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.375 19:06:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.375 "name": "raid_bdev1", 00:19:09.375 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:09.375 "strip_size_kb": 0, 00:19:09.375 "state": "online", 00:19:09.375 "raid_level": "raid1", 00:19:09.375 "superblock": true, 00:19:09.375 "num_base_bdevs": 2, 00:19:09.375 "num_base_bdevs_discovered": 2, 00:19:09.375 "num_base_bdevs_operational": 2, 00:19:09.375 "base_bdevs_list": [ 00:19:09.375 { 00:19:09.375 "name": "spare", 00:19:09.375 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:09.375 "is_configured": true, 00:19:09.375 "data_offset": 256, 00:19:09.375 "data_size": 7936 00:19:09.375 }, 00:19:09.375 { 00:19:09.375 "name": "BaseBdev2", 00:19:09.375 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:09.375 "is_configured": true, 00:19:09.375 "data_offset": 256, 00:19:09.375 "data_size": 7936 00:19:09.375 } 00:19:09.375 ] 00:19:09.375 }' 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.375 19:06:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.942 "name": "raid_bdev1", 00:19:09.942 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:09.942 "strip_size_kb": 0, 00:19:09.942 "state": "online", 00:19:09.942 "raid_level": "raid1", 00:19:09.942 "superblock": true, 00:19:09.942 "num_base_bdevs": 2, 00:19:09.942 "num_base_bdevs_discovered": 2, 00:19:09.942 "num_base_bdevs_operational": 2, 00:19:09.942 "base_bdevs_list": [ 00:19:09.942 { 00:19:09.942 "name": "spare", 00:19:09.942 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:09.942 "is_configured": true, 00:19:09.942 "data_offset": 256, 00:19:09.942 "data_size": 7936 00:19:09.942 }, 00:19:09.942 { 00:19:09.942 "name": "BaseBdev2", 00:19:09.942 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:09.942 "is_configured": true, 00:19:09.942 "data_offset": 256, 00:19:09.942 "data_size": 7936 00:19:09.942 } 00:19:09.942 ] 00:19:09.942 }' 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.942 
19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.942 [2024-11-26 19:06:01.265300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.942 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.943 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.202 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.202 "name": "raid_bdev1", 00:19:10.202 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:10.202 "strip_size_kb": 0, 00:19:10.202 "state": "online", 00:19:10.202 "raid_level": "raid1", 00:19:10.202 "superblock": true, 00:19:10.202 "num_base_bdevs": 2, 00:19:10.202 "num_base_bdevs_discovered": 1, 00:19:10.202 "num_base_bdevs_operational": 1, 00:19:10.202 "base_bdevs_list": [ 00:19:10.202 { 00:19:10.202 "name": null, 00:19:10.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.202 "is_configured": false, 00:19:10.202 "data_offset": 0, 00:19:10.202 "data_size": 7936 00:19:10.202 }, 00:19:10.202 { 00:19:10.202 
"name": "BaseBdev2", 00:19:10.202 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:10.202 "is_configured": true, 00:19:10.202 "data_offset": 256, 00:19:10.202 "data_size": 7936 00:19:10.202 } 00:19:10.202 ] 00:19:10.202 }' 00:19:10.202 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.202 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.461 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.461 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.461 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 [2024-11-26 19:06:01.825473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.719 [2024-11-26 19:06:01.825743] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.719 [2024-11-26 19:06:01.825768] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:10.719 [2024-11-26 19:06:01.825854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.719 [2024-11-26 19:06:01.838965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:10.719 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 19:06:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:10.719 [2024-11-26 19:06:01.841552] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.656 "name": "raid_bdev1", 00:19:11.656 
"uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:11.656 "strip_size_kb": 0, 00:19:11.656 "state": "online", 00:19:11.656 "raid_level": "raid1", 00:19:11.656 "superblock": true, 00:19:11.656 "num_base_bdevs": 2, 00:19:11.656 "num_base_bdevs_discovered": 2, 00:19:11.656 "num_base_bdevs_operational": 2, 00:19:11.656 "process": { 00:19:11.656 "type": "rebuild", 00:19:11.656 "target": "spare", 00:19:11.656 "progress": { 00:19:11.656 "blocks": 2560, 00:19:11.656 "percent": 32 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 "base_bdevs_list": [ 00:19:11.656 { 00:19:11.656 "name": "spare", 00:19:11.656 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:11.656 "is_configured": true, 00:19:11.656 "data_offset": 256, 00:19:11.656 "data_size": 7936 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "name": "BaseBdev2", 00:19:11.656 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:11.656 "is_configured": true, 00:19:11.656 "data_offset": 256, 00:19:11.656 "data_size": 7936 00:19:11.656 } 00:19:11.656 ] 00:19:11.656 }' 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.656 19:06:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.656 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.656 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:11.656 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.656 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.656 [2024-11-26 19:06:03.007578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.916 
[2024-11-26 19:06:03.051211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.916 [2024-11-26 19:06:03.051592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.916 [2024-11-26 19:06:03.051622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.916 [2024-11-26 19:06:03.051653] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.916 19:06:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.916 "name": "raid_bdev1", 00:19:11.916 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:11.916 "strip_size_kb": 0, 00:19:11.916 "state": "online", 00:19:11.916 "raid_level": "raid1", 00:19:11.916 "superblock": true, 00:19:11.916 "num_base_bdevs": 2, 00:19:11.916 "num_base_bdevs_discovered": 1, 00:19:11.916 "num_base_bdevs_operational": 1, 00:19:11.916 "base_bdevs_list": [ 00:19:11.916 { 00:19:11.916 "name": null, 00:19:11.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.916 "is_configured": false, 00:19:11.916 "data_offset": 0, 00:19:11.916 "data_size": 7936 00:19:11.916 }, 00:19:11.916 { 00:19:11.916 "name": "BaseBdev2", 00:19:11.916 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:11.916 "is_configured": true, 00:19:11.916 "data_offset": 256, 00:19:11.916 "data_size": 7936 00:19:11.916 } 00:19:11.916 ] 00:19:11.916 }' 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.916 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.485 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:12.485 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.485 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.485 [2024-11-26 19:06:03.611326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.485 [2024-11-26 19:06:03.611425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.485 [2024-11-26 19:06:03.611461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:12.485 [2024-11-26 19:06:03.611479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.485 [2024-11-26 19:06:03.611823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.485 [2024-11-26 19:06:03.611881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.485 [2024-11-26 19:06:03.611989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:12.485 [2024-11-26 19:06:03.612015] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:12.485 [2024-11-26 19:06:03.612030] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:12.485 [2024-11-26 19:06:03.612062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.485 spare 00:19:12.485 [2024-11-26 19:06:03.625469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:12.485 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.485 19:06:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:12.485 [2024-11-26 19:06:03.628204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.423 "name": 
"raid_bdev1", 00:19:13.423 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:13.423 "strip_size_kb": 0, 00:19:13.423 "state": "online", 00:19:13.423 "raid_level": "raid1", 00:19:13.423 "superblock": true, 00:19:13.423 "num_base_bdevs": 2, 00:19:13.423 "num_base_bdevs_discovered": 2, 00:19:13.423 "num_base_bdevs_operational": 2, 00:19:13.423 "process": { 00:19:13.423 "type": "rebuild", 00:19:13.423 "target": "spare", 00:19:13.423 "progress": { 00:19:13.423 "blocks": 2560, 00:19:13.423 "percent": 32 00:19:13.423 } 00:19:13.423 }, 00:19:13.423 "base_bdevs_list": [ 00:19:13.423 { 00:19:13.423 "name": "spare", 00:19:13.423 "uuid": "9627b474-eeea-5a77-a712-94f35832bfbe", 00:19:13.423 "is_configured": true, 00:19:13.423 "data_offset": 256, 00:19:13.423 "data_size": 7936 00:19:13.423 }, 00:19:13.423 { 00:19:13.423 "name": "BaseBdev2", 00:19:13.423 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:13.423 "is_configured": true, 00:19:13.423 "data_offset": 256, 00:19:13.423 "data_size": 7936 00:19:13.423 } 00:19:13.423 ] 00:19:13.423 }' 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.423 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.682 [2024-11-26 19:06:04.802338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:13.682 [2024-11-26 19:06:04.838325] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.682 [2024-11-26 19:06:04.838639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.682 [2024-11-26 19:06:04.838675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.682 [2024-11-26 19:06:04.838689] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.682 "name": "raid_bdev1", 00:19:13.682 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:13.682 "strip_size_kb": 0, 00:19:13.682 "state": "online", 00:19:13.682 "raid_level": "raid1", 00:19:13.682 "superblock": true, 00:19:13.682 "num_base_bdevs": 2, 00:19:13.682 "num_base_bdevs_discovered": 1, 00:19:13.682 "num_base_bdevs_operational": 1, 00:19:13.682 "base_bdevs_list": [ 00:19:13.682 { 00:19:13.682 "name": null, 00:19:13.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.682 "is_configured": false, 00:19:13.682 "data_offset": 0, 00:19:13.682 "data_size": 7936 00:19:13.682 }, 00:19:13.682 { 00:19:13.682 "name": "BaseBdev2", 00:19:13.682 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:13.682 "is_configured": true, 00:19:13.682 "data_offset": 256, 00:19:13.682 "data_size": 7936 00:19:13.682 } 00:19:13.682 ] 00:19:13.682 }' 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.682 19:06:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.250 19:06:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.250 "name": "raid_bdev1", 00:19:14.250 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:14.250 "strip_size_kb": 0, 00:19:14.250 "state": "online", 00:19:14.250 "raid_level": "raid1", 00:19:14.250 "superblock": true, 00:19:14.250 "num_base_bdevs": 2, 00:19:14.250 "num_base_bdevs_discovered": 1, 00:19:14.250 "num_base_bdevs_operational": 1, 00:19:14.250 "base_bdevs_list": [ 00:19:14.250 { 00:19:14.250 "name": null, 00:19:14.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.250 "is_configured": false, 00:19:14.250 "data_offset": 0, 00:19:14.250 "data_size": 7936 00:19:14.250 }, 00:19:14.250 { 00:19:14.250 "name": "BaseBdev2", 00:19:14.250 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:14.250 "is_configured": true, 00:19:14.250 "data_offset": 256, 00:19:14.250 "data_size": 7936 00:19:14.250 } 00:19:14.250 ] 00:19:14.250 }' 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.250 [2024-11-26 19:06:05.549849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:14.250 [2024-11-26 19:06:05.550050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.250 [2024-11-26 19:06:05.550097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:14.250 [2024-11-26 19:06:05.550115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.250 [2024-11-26 19:06:05.550402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.250 [2024-11-26 19:06:05.550424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:14.250 [2024-11-26 19:06:05.550498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:14.250 [2024-11-26 19:06:05.550519] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:14.250 [2024-11-26 19:06:05.550534] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:14.250 [2024-11-26 19:06:05.550548] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:14.250 BaseBdev1 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.250 19:06:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:15.216 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.216 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.217 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.477 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.477 "name": "raid_bdev1", 00:19:15.477 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:15.477 "strip_size_kb": 0, 00:19:15.477 "state": "online", 00:19:15.477 "raid_level": "raid1", 00:19:15.477 "superblock": true, 00:19:15.477 "num_base_bdevs": 2, 00:19:15.477 "num_base_bdevs_discovered": 1, 00:19:15.477 "num_base_bdevs_operational": 1, 00:19:15.477 "base_bdevs_list": [ 00:19:15.477 { 00:19:15.477 "name": null, 00:19:15.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.477 "is_configured": false, 00:19:15.477 "data_offset": 0, 00:19:15.477 "data_size": 7936 00:19:15.477 }, 00:19:15.477 { 00:19:15.477 "name": "BaseBdev2", 00:19:15.477 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:15.477 "is_configured": true, 00:19:15.477 "data_offset": 256, 00:19:15.477 "data_size": 7936 00:19:15.477 } 00:19:15.477 ] 00:19:15.477 }' 00:19:15.477 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.477 19:06:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.737 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.997 "name": "raid_bdev1", 00:19:15.997 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:15.997 "strip_size_kb": 0, 00:19:15.997 "state": "online", 00:19:15.997 "raid_level": "raid1", 00:19:15.997 "superblock": true, 00:19:15.997 "num_base_bdevs": 2, 00:19:15.997 "num_base_bdevs_discovered": 1, 00:19:15.997 "num_base_bdevs_operational": 1, 00:19:15.997 "base_bdevs_list": [ 00:19:15.997 { 00:19:15.997 "name": null, 00:19:15.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.997 "is_configured": false, 00:19:15.997 "data_offset": 0, 00:19:15.997 "data_size": 7936 00:19:15.997 }, 00:19:15.997 { 00:19:15.997 "name": "BaseBdev2", 00:19:15.997 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:15.997 "is_configured": 
true, 00:19:15.997 "data_offset": 256, 00:19:15.997 "data_size": 7936 00:19:15.997 } 00:19:15.997 ] 00:19:15.997 }' 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.997 [2024-11-26 19:06:07.250507] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.997 [2024-11-26 19:06:07.250875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:15.997 [2024-11-26 19:06:07.250938] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:15.997 request: 00:19:15.997 { 00:19:15.997 "base_bdev": "BaseBdev1", 00:19:15.997 "raid_bdev": "raid_bdev1", 00:19:15.997 "method": "bdev_raid_add_base_bdev", 00:19:15.997 "req_id": 1 00:19:15.997 } 00:19:15.997 Got JSON-RPC error response 00:19:15.997 response: 00:19:15.997 { 00:19:15.997 "code": -22, 00:19:15.997 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:15.997 } 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.997 19:06:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:16.934 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.934 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.934 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.934 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.935 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.194 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.194 "name": "raid_bdev1", 00:19:17.194 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:17.194 "strip_size_kb": 0, 00:19:17.194 "state": "online", 00:19:17.194 "raid_level": "raid1", 00:19:17.194 "superblock": true, 00:19:17.194 "num_base_bdevs": 2, 00:19:17.194 "num_base_bdevs_discovered": 1, 00:19:17.194 "num_base_bdevs_operational": 1, 00:19:17.194 "base_bdevs_list": [ 00:19:17.194 { 00:19:17.194 "name": null, 00:19:17.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.194 "is_configured": false, 00:19:17.194 
"data_offset": 0, 00:19:17.194 "data_size": 7936 00:19:17.194 }, 00:19:17.194 { 00:19:17.194 "name": "BaseBdev2", 00:19:17.194 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:17.194 "is_configured": true, 00:19:17.194 "data_offset": 256, 00:19:17.194 "data_size": 7936 00:19:17.194 } 00:19:17.194 ] 00:19:17.194 }' 00:19:17.194 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.194 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.453 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.712 "name": "raid_bdev1", 00:19:17.712 "uuid": "2505248d-8d76-4430-a6e9-888a165c9122", 00:19:17.712 
"strip_size_kb": 0, 00:19:17.712 "state": "online", 00:19:17.712 "raid_level": "raid1", 00:19:17.712 "superblock": true, 00:19:17.712 "num_base_bdevs": 2, 00:19:17.712 "num_base_bdevs_discovered": 1, 00:19:17.712 "num_base_bdevs_operational": 1, 00:19:17.712 "base_bdevs_list": [ 00:19:17.712 { 00:19:17.712 "name": null, 00:19:17.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.712 "is_configured": false, 00:19:17.712 "data_offset": 0, 00:19:17.712 "data_size": 7936 00:19:17.712 }, 00:19:17.712 { 00:19:17.712 "name": "BaseBdev2", 00:19:17.712 "uuid": "337d6519-fea0-5342-a703-4d0e63d0d35a", 00:19:17.712 "is_configured": true, 00:19:17.712 "data_offset": 256, 00:19:17.712 "data_size": 7936 00:19:17.712 } 00:19:17.712 ] 00:19:17.712 }' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88274 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88274 ']' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88274 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88274 00:19:17.712 killing process with 
pid 88274 00:19:17.712 Received shutdown signal, test time was about 60.000000 seconds 00:19:17.712 00:19:17.712 Latency(us) 00:19:17.712 [2024-11-26T19:06:09.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.712 [2024-11-26T19:06:09.079Z] =================================================================================================================== 00:19:17.712 [2024-11-26T19:06:09.079Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88274' 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88274 00:19:17.712 [2024-11-26 19:06:08.977200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.712 19:06:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88274 00:19:17.712 [2024-11-26 19:06:08.977390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.712 [2024-11-26 19:06:08.977471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.712 [2024-11-26 19:06:08.977489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:17.972 [2024-11-26 19:06:09.268998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.349 19:06:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:19.349 00:19:19.349 real 0m21.809s 00:19:19.349 user 0m29.558s 00:19:19.349 sys 0m2.693s 00:19:19.349 ************************************ 
00:19:19.349 END TEST raid_rebuild_test_sb_md_separate 00:19:19.349 ************************************ 00:19:19.349 19:06:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.349 19:06:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.349 19:06:10 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:19.349 19:06:10 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:19.349 19:06:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:19.349 19:06:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.349 19:06:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.349 ************************************ 00:19:19.349 START TEST raid_state_function_test_sb_md_interleaved 00:19:19.349 ************************************ 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev1 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:19.349 Process raid pid: 88976 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:19.349 19:06:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88976 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88976' 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88976 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88976 ']' 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.349 19:06:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.349 [2024-11-26 19:06:10.482129] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:19:19.349 [2024-11-26 19:06:10.482539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.349 [2024-11-26 19:06:10.663362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.608 [2024-11-26 19:06:10.794655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.868 [2024-11-26 19:06:11.001977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.868 [2024-11-26 19:06:11.002032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.127 [2024-11-26 19:06:11.454106] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.127 [2024-11-26 19:06:11.454352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.127 [2024-11-26 19:06:11.454479] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.127 [2024-11-26 19:06:11.454614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.127 19:06:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.127 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.127 19:06:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.386 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.386 "name": "Existed_Raid", 00:19:20.386 "uuid": "99f18caf-ec7a-4638-91fd-ff2c118a25b3", 00:19:20.386 "strip_size_kb": 0, 00:19:20.386 "state": "configuring", 00:19:20.386 "raid_level": "raid1", 00:19:20.386 "superblock": true, 00:19:20.386 "num_base_bdevs": 2, 00:19:20.386 "num_base_bdevs_discovered": 0, 00:19:20.386 "num_base_bdevs_operational": 2, 00:19:20.386 "base_bdevs_list": [ 00:19:20.386 { 00:19:20.386 "name": "BaseBdev1", 00:19:20.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.386 "is_configured": false, 00:19:20.386 "data_offset": 0, 00:19:20.386 "data_size": 0 00:19:20.386 }, 00:19:20.386 { 00:19:20.386 "name": "BaseBdev2", 00:19:20.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.386 "is_configured": false, 00:19:20.386 "data_offset": 0, 00:19:20.386 "data_size": 0 00:19:20.386 } 00:19:20.386 ] 00:19:20.386 }' 00:19:20.386 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.386 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.645 [2024-11-26 19:06:11.942274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:20.645 [2024-11-26 19:06:11.942530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.645 [2024-11-26 19:06:11.950232] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.645 [2024-11-26 19:06:11.950287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.645 [2024-11-26 19:06:11.950303] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.645 [2024-11-26 19:06:11.950337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.645 BaseBdev1 00:19:20.645 [2024-11-26 19:06:11.995323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.645 19:06:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.645 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.645 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:20.645 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.645 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 [ 00:19:20.904 { 00:19:20.904 "name": "BaseBdev1", 00:19:20.904 "aliases": [ 00:19:20.904 "3107559e-769a-42b5-ab98-612f9891ee63" 00:19:20.904 ], 00:19:20.904 "product_name": "Malloc disk", 00:19:20.904 "block_size": 4128, 00:19:20.904 "num_blocks": 8192, 00:19:20.904 "uuid": "3107559e-769a-42b5-ab98-612f9891ee63", 00:19:20.904 "md_size": 32, 00:19:20.904 
"md_interleave": true, 00:19:20.904 "dif_type": 0, 00:19:20.904 "assigned_rate_limits": { 00:19:20.904 "rw_ios_per_sec": 0, 00:19:20.904 "rw_mbytes_per_sec": 0, 00:19:20.904 "r_mbytes_per_sec": 0, 00:19:20.904 "w_mbytes_per_sec": 0 00:19:20.904 }, 00:19:20.904 "claimed": true, 00:19:20.904 "claim_type": "exclusive_write", 00:19:20.904 "zoned": false, 00:19:20.904 "supported_io_types": { 00:19:20.904 "read": true, 00:19:20.904 "write": true, 00:19:20.904 "unmap": true, 00:19:20.904 "flush": true, 00:19:20.904 "reset": true, 00:19:20.904 "nvme_admin": false, 00:19:20.904 "nvme_io": false, 00:19:20.904 "nvme_io_md": false, 00:19:20.904 "write_zeroes": true, 00:19:20.904 "zcopy": true, 00:19:20.904 "get_zone_info": false, 00:19:20.904 "zone_management": false, 00:19:20.904 "zone_append": false, 00:19:20.904 "compare": false, 00:19:20.904 "compare_and_write": false, 00:19:20.904 "abort": true, 00:19:20.904 "seek_hole": false, 00:19:20.904 "seek_data": false, 00:19:20.904 "copy": true, 00:19:20.904 "nvme_iov_md": false 00:19:20.904 }, 00:19:20.904 "memory_domains": [ 00:19:20.904 { 00:19:20.905 "dma_device_id": "system", 00:19:20.905 "dma_device_type": 1 00:19:20.905 }, 00:19:20.905 { 00:19:20.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.905 "dma_device_type": 2 00:19:20.905 } 00:19:20.905 ], 00:19:20.905 "driver_specific": {} 00:19:20.905 } 00:19:20.905 ] 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.905 19:06:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.905 "name": "Existed_Raid", 00:19:20.905 "uuid": "08d5b722-3d01-4c85-bad1-07438e1d62e4", 00:19:20.905 "strip_size_kb": 0, 00:19:20.905 "state": "configuring", 00:19:20.905 "raid_level": "raid1", 
00:19:20.905 "superblock": true, 00:19:20.905 "num_base_bdevs": 2, 00:19:20.905 "num_base_bdevs_discovered": 1, 00:19:20.905 "num_base_bdevs_operational": 2, 00:19:20.905 "base_bdevs_list": [ 00:19:20.905 { 00:19:20.905 "name": "BaseBdev1", 00:19:20.905 "uuid": "3107559e-769a-42b5-ab98-612f9891ee63", 00:19:20.905 "is_configured": true, 00:19:20.905 "data_offset": 256, 00:19:20.905 "data_size": 7936 00:19:20.905 }, 00:19:20.905 { 00:19:20.905 "name": "BaseBdev2", 00:19:20.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.905 "is_configured": false, 00:19:20.905 "data_offset": 0, 00:19:20.905 "data_size": 0 00:19:20.905 } 00:19:20.905 ] 00:19:20.905 }' 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.905 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.473 [2024-11-26 19:06:12.563643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:21.473 [2024-11-26 19:06:12.563885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.473 [2024-11-26 19:06:12.571676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.473 [2024-11-26 19:06:12.574399] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:21.473 [2024-11-26 19:06:12.574621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.473 
19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.473 "name": "Existed_Raid", 00:19:21.473 "uuid": "1cf7f64e-6a68-4f13-b612-37a8303666c5", 00:19:21.473 "strip_size_kb": 0, 00:19:21.473 "state": "configuring", 00:19:21.473 "raid_level": "raid1", 00:19:21.473 "superblock": true, 00:19:21.473 "num_base_bdevs": 2, 00:19:21.473 "num_base_bdevs_discovered": 1, 00:19:21.473 "num_base_bdevs_operational": 2, 00:19:21.473 "base_bdevs_list": [ 00:19:21.473 { 00:19:21.473 "name": "BaseBdev1", 00:19:21.473 "uuid": "3107559e-769a-42b5-ab98-612f9891ee63", 00:19:21.473 "is_configured": true, 00:19:21.473 "data_offset": 256, 00:19:21.473 "data_size": 7936 00:19:21.473 }, 00:19:21.473 { 00:19:21.473 "name": "BaseBdev2", 00:19:21.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.473 "is_configured": false, 00:19:21.473 "data_offset": 0, 00:19:21.473 "data_size": 0 00:19:21.473 } 00:19:21.473 ] 00:19:21.473 }' 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:21.473 19:06:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.732 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:21.732 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.732 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.992 [2024-11-26 19:06:13.130261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.992 [2024-11-26 19:06:13.130722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:21.992 [2024-11-26 19:06:13.130748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:21.992 [2024-11-26 19:06:13.130850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:21.992 [2024-11-26 19:06:13.130975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:21.992 [2024-11-26 19:06:13.130996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:21.992 BaseBdev2 00:19:21.992 [2024-11-26 19:06:13.131080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.992 [ 00:19:21.992 { 00:19:21.992 "name": "BaseBdev2", 00:19:21.992 "aliases": [ 00:19:21.992 "af62161e-48ba-4d47-bc8a-4fb250776a87" 00:19:21.992 ], 00:19:21.992 "product_name": "Malloc disk", 00:19:21.992 "block_size": 4128, 00:19:21.992 "num_blocks": 8192, 00:19:21.992 "uuid": "af62161e-48ba-4d47-bc8a-4fb250776a87", 00:19:21.992 "md_size": 32, 00:19:21.992 "md_interleave": true, 00:19:21.992 "dif_type": 0, 00:19:21.992 "assigned_rate_limits": { 00:19:21.992 "rw_ios_per_sec": 0, 00:19:21.992 "rw_mbytes_per_sec": 0, 00:19:21.992 "r_mbytes_per_sec": 0, 00:19:21.992 "w_mbytes_per_sec": 0 00:19:21.992 }, 00:19:21.992 "claimed": true, 00:19:21.992 "claim_type": "exclusive_write", 
00:19:21.992 "zoned": false, 00:19:21.992 "supported_io_types": { 00:19:21.992 "read": true, 00:19:21.992 "write": true, 00:19:21.992 "unmap": true, 00:19:21.992 "flush": true, 00:19:21.992 "reset": true, 00:19:21.992 "nvme_admin": false, 00:19:21.992 "nvme_io": false, 00:19:21.992 "nvme_io_md": false, 00:19:21.992 "write_zeroes": true, 00:19:21.992 "zcopy": true, 00:19:21.992 "get_zone_info": false, 00:19:21.992 "zone_management": false, 00:19:21.992 "zone_append": false, 00:19:21.992 "compare": false, 00:19:21.992 "compare_and_write": false, 00:19:21.992 "abort": true, 00:19:21.992 "seek_hole": false, 00:19:21.992 "seek_data": false, 00:19:21.992 "copy": true, 00:19:21.992 "nvme_iov_md": false 00:19:21.992 }, 00:19:21.992 "memory_domains": [ 00:19:21.992 { 00:19:21.992 "dma_device_id": "system", 00:19:21.992 "dma_device_type": 1 00:19:21.992 }, 00:19:21.992 { 00:19:21.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.992 "dma_device_type": 2 00:19:21.992 } 00:19:21.992 ], 00:19:21.992 "driver_specific": {} 00:19:21.992 } 00:19:21.992 ] 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.992 
19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.992 "name": "Existed_Raid", 00:19:21.992 "uuid": "1cf7f64e-6a68-4f13-b612-37a8303666c5", 00:19:21.992 "strip_size_kb": 0, 00:19:21.992 "state": "online", 00:19:21.992 "raid_level": "raid1", 00:19:21.992 "superblock": true, 00:19:21.992 "num_base_bdevs": 2, 00:19:21.992 "num_base_bdevs_discovered": 2, 00:19:21.992 
"num_base_bdevs_operational": 2, 00:19:21.992 "base_bdevs_list": [ 00:19:21.992 { 00:19:21.992 "name": "BaseBdev1", 00:19:21.992 "uuid": "3107559e-769a-42b5-ab98-612f9891ee63", 00:19:21.992 "is_configured": true, 00:19:21.992 "data_offset": 256, 00:19:21.992 "data_size": 7936 00:19:21.992 }, 00:19:21.992 { 00:19:21.992 "name": "BaseBdev2", 00:19:21.992 "uuid": "af62161e-48ba-4d47-bc8a-4fb250776a87", 00:19:21.992 "is_configured": true, 00:19:21.992 "data_offset": 256, 00:19:21.992 "data_size": 7936 00:19:21.992 } 00:19:21.992 ] 00:19:21.992 }' 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.992 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.620 19:06:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.620 [2024-11-26 19:06:13.706862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.620 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:22.620 "name": "Existed_Raid", 00:19:22.620 "aliases": [ 00:19:22.620 "1cf7f64e-6a68-4f13-b612-37a8303666c5" 00:19:22.620 ], 00:19:22.620 "product_name": "Raid Volume", 00:19:22.620 "block_size": 4128, 00:19:22.620 "num_blocks": 7936, 00:19:22.620 "uuid": "1cf7f64e-6a68-4f13-b612-37a8303666c5", 00:19:22.620 "md_size": 32, 00:19:22.620 "md_interleave": true, 00:19:22.620 "dif_type": 0, 00:19:22.620 "assigned_rate_limits": { 00:19:22.620 "rw_ios_per_sec": 0, 00:19:22.620 "rw_mbytes_per_sec": 0, 00:19:22.620 "r_mbytes_per_sec": 0, 00:19:22.620 "w_mbytes_per_sec": 0 00:19:22.620 }, 00:19:22.620 "claimed": false, 00:19:22.620 "zoned": false, 00:19:22.620 "supported_io_types": { 00:19:22.620 "read": true, 00:19:22.620 "write": true, 00:19:22.620 "unmap": false, 00:19:22.620 "flush": false, 00:19:22.620 "reset": true, 00:19:22.620 "nvme_admin": false, 00:19:22.620 "nvme_io": false, 00:19:22.620 "nvme_io_md": false, 00:19:22.620 "write_zeroes": true, 00:19:22.620 "zcopy": false, 00:19:22.620 "get_zone_info": false, 00:19:22.620 "zone_management": false, 00:19:22.620 "zone_append": false, 00:19:22.620 "compare": false, 00:19:22.620 "compare_and_write": false, 00:19:22.620 "abort": false, 00:19:22.620 "seek_hole": false, 00:19:22.620 "seek_data": false, 00:19:22.620 "copy": false, 00:19:22.620 "nvme_iov_md": false 00:19:22.620 }, 00:19:22.620 "memory_domains": [ 00:19:22.620 { 00:19:22.620 "dma_device_id": "system", 00:19:22.620 "dma_device_type": 1 00:19:22.620 }, 00:19:22.620 { 00:19:22.620 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:22.620 "dma_device_type": 2 00:19:22.620 }, 00:19:22.620 { 00:19:22.620 "dma_device_id": "system", 00:19:22.620 "dma_device_type": 1 00:19:22.620 }, 00:19:22.620 { 00:19:22.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.620 "dma_device_type": 2 00:19:22.621 } 00:19:22.621 ], 00:19:22.621 "driver_specific": { 00:19:22.621 "raid": { 00:19:22.621 "uuid": "1cf7f64e-6a68-4f13-b612-37a8303666c5", 00:19:22.621 "strip_size_kb": 0, 00:19:22.621 "state": "online", 00:19:22.621 "raid_level": "raid1", 00:19:22.621 "superblock": true, 00:19:22.621 "num_base_bdevs": 2, 00:19:22.621 "num_base_bdevs_discovered": 2, 00:19:22.621 "num_base_bdevs_operational": 2, 00:19:22.621 "base_bdevs_list": [ 00:19:22.621 { 00:19:22.621 "name": "BaseBdev1", 00:19:22.621 "uuid": "3107559e-769a-42b5-ab98-612f9891ee63", 00:19:22.621 "is_configured": true, 00:19:22.621 "data_offset": 256, 00:19:22.621 "data_size": 7936 00:19:22.621 }, 00:19:22.621 { 00:19:22.621 "name": "BaseBdev2", 00:19:22.621 "uuid": "af62161e-48ba-4d47-bc8a-4fb250776a87", 00:19:22.621 "is_configured": true, 00:19:22.621 "data_offset": 256, 00:19:22.621 "data_size": 7936 00:19:22.621 } 00:19:22.621 ] 00:19:22.621 } 00:19:22.621 } 00:19:22.621 }' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:22.621 BaseBdev2' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:22.621 
19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.621 19:06:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.621 [2024-11-26 19:06:13.966562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.880 19:06:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.880 "name": "Existed_Raid", 00:19:22.880 "uuid": "1cf7f64e-6a68-4f13-b612-37a8303666c5", 00:19:22.880 "strip_size_kb": 0, 00:19:22.880 "state": "online", 00:19:22.880 "raid_level": "raid1", 00:19:22.880 "superblock": true, 00:19:22.880 "num_base_bdevs": 2, 00:19:22.880 "num_base_bdevs_discovered": 1, 00:19:22.880 "num_base_bdevs_operational": 1, 00:19:22.880 "base_bdevs_list": [ 00:19:22.880 { 00:19:22.880 "name": null, 00:19:22.880 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:22.880 "is_configured": false, 00:19:22.880 "data_offset": 0, 00:19:22.880 "data_size": 7936 00:19:22.880 }, 00:19:22.880 { 00:19:22.880 "name": "BaseBdev2", 00:19:22.880 "uuid": "af62161e-48ba-4d47-bc8a-4fb250776a87", 00:19:22.880 "is_configured": true, 00:19:22.880 "data_offset": 256, 00:19:22.880 "data_size": 7936 00:19:22.880 } 00:19:22.880 ] 00:19:22.880 }' 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.880 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:23.450 19:06:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.450 [2024-11-26 19:06:14.629720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:23.450 [2024-11-26 19:06:14.630071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.450 [2024-11-26 19:06:14.708995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.450 [2024-11-26 19:06:14.709207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.450 [2024-11-26 19:06:14.709243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88976 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88976 ']' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88976 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88976 00:19:23.450 killing process with pid 88976 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88976' 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88976 00:19:23.450 [2024-11-26 19:06:14.799576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.450 19:06:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88976 00:19:23.450 [2024-11-26 19:06:14.814322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.828 
19:06:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:24.828 00:19:24.828 real 0m5.470s 00:19:24.828 user 0m8.297s 00:19:24.828 sys 0m0.792s 00:19:24.828 ************************************ 00:19:24.828 END TEST raid_state_function_test_sb_md_interleaved 00:19:24.828 ************************************ 00:19:24.828 19:06:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.828 19:06:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.828 19:06:15 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:24.828 19:06:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:24.828 19:06:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.828 19:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.828 ************************************ 00:19:24.828 START TEST raid_superblock_test_md_interleaved 00:19:24.828 ************************************ 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:24.828 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:24.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89229 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89229 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89229 ']' 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.829 19:06:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.829 [2024-11-26 19:06:16.012332] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:19:24.829 [2024-11-26 19:06:16.012744] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89229 ] 00:19:25.087 [2024-11-26 19:06:16.208059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.087 [2024-11-26 19:06:16.389105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.346 [2024-11-26 19:06:16.648326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.346 [2024-11-26 19:06:16.648652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.914 malloc1 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.914 [2024-11-26 19:06:17.146843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:25.914 [2024-11-26 19:06:17.147115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.914 [2024-11-26 19:06:17.147212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:25.914 [2024-11-26 19:06:17.147458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.914 [2024-11-26 19:06:17.150522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.914 pt1 00:19:25.914 [2024-11-26 19:06:17.150742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:25.914 19:06:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:25.914 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.915 malloc2 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.915 [2024-11-26 19:06:17.207397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:25.915 [2024-11-26 19:06:17.207626] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.915 [2024-11-26 19:06:17.207802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:25.915 [2024-11-26 19:06:17.207981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.915 pt2 00:19:25.915 [2024-11-26 19:06:17.211248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.915 [2024-11-26 19:06:17.211312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.915 [2024-11-26 19:06:17.215610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:25.915 [2024-11-26 19:06:17.218800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:25.915 [2024-11-26 19:06:17.219260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:25.915 [2024-11-26 19:06:17.219429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:25.915 [2024-11-26 19:06:17.219617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:25.915 [2024-11-26 19:06:17.219888] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:25.915 [2024-11-26 19:06:17.220076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:25.915 [2024-11-26 19:06:17.220444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.915 "name": "raid_bdev1", 00:19:25.915 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:25.915 "strip_size_kb": 0, 00:19:25.915 "state": "online", 00:19:25.915 "raid_level": "raid1", 00:19:25.915 "superblock": true, 00:19:25.915 "num_base_bdevs": 2, 00:19:25.915 "num_base_bdevs_discovered": 2, 00:19:25.915 "num_base_bdevs_operational": 2, 00:19:25.915 "base_bdevs_list": [ 00:19:25.915 { 00:19:25.915 "name": "pt1", 00:19:25.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:25.915 "is_configured": true, 00:19:25.915 "data_offset": 256, 00:19:25.915 "data_size": 7936 00:19:25.915 }, 00:19:25.915 { 00:19:25.915 "name": "pt2", 00:19:25.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.915 "is_configured": true, 00:19:25.915 "data_offset": 256, 00:19:25.915 "data_size": 7936 00:19:25.915 } 00:19:25.915 ] 00:19:25.915 }' 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.915 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:26.482 19:06:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:26.482 [2024-11-26 19:06:17.753096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.482 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:26.482 "name": "raid_bdev1", 00:19:26.482 "aliases": [ 00:19:26.482 "db16148b-23a0-4a31-a408-310024bcae0f" 00:19:26.482 ], 00:19:26.482 "product_name": "Raid Volume", 00:19:26.482 "block_size": 4128, 00:19:26.482 "num_blocks": 7936, 00:19:26.482 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:26.482 "md_size": 32, 00:19:26.482 "md_interleave": true, 00:19:26.482 "dif_type": 0, 00:19:26.482 "assigned_rate_limits": { 00:19:26.482 "rw_ios_per_sec": 0, 00:19:26.482 "rw_mbytes_per_sec": 0, 00:19:26.482 "r_mbytes_per_sec": 0, 00:19:26.482 "w_mbytes_per_sec": 0 00:19:26.482 }, 00:19:26.482 "claimed": false, 00:19:26.482 "zoned": false, 00:19:26.482 "supported_io_types": { 00:19:26.483 "read": true, 00:19:26.483 "write": true, 00:19:26.483 "unmap": false, 00:19:26.483 "flush": false, 00:19:26.483 "reset": true, 
00:19:26.483 "nvme_admin": false, 00:19:26.483 "nvme_io": false, 00:19:26.483 "nvme_io_md": false, 00:19:26.483 "write_zeroes": true, 00:19:26.483 "zcopy": false, 00:19:26.483 "get_zone_info": false, 00:19:26.483 "zone_management": false, 00:19:26.483 "zone_append": false, 00:19:26.483 "compare": false, 00:19:26.483 "compare_and_write": false, 00:19:26.483 "abort": false, 00:19:26.483 "seek_hole": false, 00:19:26.483 "seek_data": false, 00:19:26.483 "copy": false, 00:19:26.483 "nvme_iov_md": false 00:19:26.483 }, 00:19:26.483 "memory_domains": [ 00:19:26.483 { 00:19:26.483 "dma_device_id": "system", 00:19:26.483 "dma_device_type": 1 00:19:26.483 }, 00:19:26.483 { 00:19:26.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.483 "dma_device_type": 2 00:19:26.483 }, 00:19:26.483 { 00:19:26.483 "dma_device_id": "system", 00:19:26.483 "dma_device_type": 1 00:19:26.483 }, 00:19:26.483 { 00:19:26.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.483 "dma_device_type": 2 00:19:26.483 } 00:19:26.483 ], 00:19:26.483 "driver_specific": { 00:19:26.483 "raid": { 00:19:26.483 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:26.483 "strip_size_kb": 0, 00:19:26.483 "state": "online", 00:19:26.483 "raid_level": "raid1", 00:19:26.483 "superblock": true, 00:19:26.483 "num_base_bdevs": 2, 00:19:26.483 "num_base_bdevs_discovered": 2, 00:19:26.483 "num_base_bdevs_operational": 2, 00:19:26.483 "base_bdevs_list": [ 00:19:26.483 { 00:19:26.483 "name": "pt1", 00:19:26.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:26.483 "is_configured": true, 00:19:26.483 "data_offset": 256, 00:19:26.483 "data_size": 7936 00:19:26.483 }, 00:19:26.483 { 00:19:26.483 "name": "pt2", 00:19:26.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.483 "is_configured": true, 00:19:26.483 "data_offset": 256, 00:19:26.483 "data_size": 7936 00:19:26.483 } 00:19:26.483 ] 00:19:26.483 } 00:19:26.483 } 00:19:26.483 }' 00:19:26.483 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:26.741 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:26.741 pt2' 00:19:26.741 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.741 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:26.741 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:26.741 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.742 
19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.742 19:06:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.742 [2024-11-26 19:06:18.029057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=db16148b-23a0-4a31-a408-310024bcae0f 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z db16148b-23a0-4a31-a408-310024bcae0f ']' 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.742 19:06:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.742 [2024-11-26 19:06:18.080718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:26.742 [2024-11-26 19:06:18.080864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:26.742 [2024-11-26 19:06:18.081084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.742 [2024-11-26 19:06:18.081176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.742 [2024-11-26 19:06:18.081212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:26.742 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:27.001 19:06:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:27.001 
19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.001 [2024-11-26 19:06:18.220843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:27.001 [2024-11-26 19:06:18.223933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:27.001 [2024-11-26 19:06:18.224046] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:27.001 [2024-11-26 19:06:18.224129] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:27.001 [2024-11-26 19:06:18.224155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.001 [2024-11-26 19:06:18.224171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:27.001 request: 
00:19:27.001 { 00:19:27.001 "name": "raid_bdev1", 00:19:27.001 "raid_level": "raid1", 00:19:27.001 "base_bdevs": [ 00:19:27.001 "malloc1", 00:19:27.001 "malloc2" 00:19:27.001 ], 00:19:27.001 "superblock": false, 00:19:27.001 "method": "bdev_raid_create", 00:19:27.001 "req_id": 1 00:19:27.001 } 00:19:27.001 Got JSON-RPC error response 00:19:27.001 response: 00:19:27.001 { 00:19:27.001 "code": -17, 00:19:27.001 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:27.001 } 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.001 [2024-11-26 19:06:18.292974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:27.001 [2024-11-26 19:06:18.293075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.001 [2024-11-26 19:06:18.293105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:27.001 [2024-11-26 19:06:18.293123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.001 [2024-11-26 19:06:18.296107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.001 pt1 00:19:27.001 [2024-11-26 19:06:18.296308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:27.001 [2024-11-26 19:06:18.296402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:27.001 [2024-11-26 19:06:18.296480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.001 19:06:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.001 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.002 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.002 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.002 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.002 "name": "raid_bdev1", 00:19:27.002 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:27.002 "strip_size_kb": 0, 00:19:27.002 "state": "configuring", 00:19:27.002 "raid_level": "raid1", 00:19:27.002 "superblock": true, 00:19:27.002 "num_base_bdevs": 2, 00:19:27.002 "num_base_bdevs_discovered": 1, 00:19:27.002 "num_base_bdevs_operational": 2, 00:19:27.002 "base_bdevs_list": [ 00:19:27.002 { 00:19:27.002 "name": "pt1", 00:19:27.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.002 "is_configured": true, 00:19:27.002 
"data_offset": 256, 00:19:27.002 "data_size": 7936 00:19:27.002 }, 00:19:27.002 { 00:19:27.002 "name": null, 00:19:27.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.002 "is_configured": false, 00:19:27.002 "data_offset": 256, 00:19:27.002 "data_size": 7936 00:19:27.002 } 00:19:27.002 ] 00:19:27.002 }' 00:19:27.002 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.002 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.570 [2024-11-26 19:06:18.829130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.570 [2024-11-26 19:06:18.829362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.570 [2024-11-26 19:06:18.829526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:27.570 [2024-11-26 19:06:18.829653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.570 [2024-11-26 19:06:18.829916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.570 [2024-11-26 19:06:18.829969] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:19:27.570 [2024-11-26 19:06:18.830048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:27.570 [2024-11-26 19:06:18.830085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:27.570 [2024-11-26 19:06:18.830244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:27.570 [2024-11-26 19:06:18.830278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:27.570 [2024-11-26 19:06:18.830409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:27.570 [2024-11-26 19:06:18.830507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:27.570 [2024-11-26 19:06:18.830521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:27.570 [2024-11-26 19:06:18.830620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.570 pt2 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.570 19:06:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.570 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.571 "name": "raid_bdev1", 00:19:27.571 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:27.571 "strip_size_kb": 0, 00:19:27.571 "state": "online", 00:19:27.571 "raid_level": "raid1", 00:19:27.571 "superblock": true, 00:19:27.571 "num_base_bdevs": 2, 00:19:27.571 "num_base_bdevs_discovered": 2, 00:19:27.571 "num_base_bdevs_operational": 2, 00:19:27.571 "base_bdevs_list": [ 00:19:27.571 { 00:19:27.571 "name": "pt1", 00:19:27.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.571 "is_configured": true, 00:19:27.571 
"data_offset": 256, 00:19:27.571 "data_size": 7936 00:19:27.571 }, 00:19:27.571 { 00:19:27.571 "name": "pt2", 00:19:27.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.571 "is_configured": true, 00:19:27.571 "data_offset": 256, 00:19:27.571 "data_size": 7936 00:19:27.571 } 00:19:27.571 ] 00:19:27.571 }' 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.571 19:06:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.175 [2024-11-26 19:06:19.369680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:28.175 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:28.175 "name": "raid_bdev1", 00:19:28.175 "aliases": [ 00:19:28.175 "db16148b-23a0-4a31-a408-310024bcae0f" 00:19:28.175 ], 00:19:28.175 "product_name": "Raid Volume", 00:19:28.175 "block_size": 4128, 00:19:28.175 "num_blocks": 7936, 00:19:28.175 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:28.175 "md_size": 32, 00:19:28.175 "md_interleave": true, 00:19:28.175 "dif_type": 0, 00:19:28.175 "assigned_rate_limits": { 00:19:28.175 "rw_ios_per_sec": 0, 00:19:28.175 "rw_mbytes_per_sec": 0, 00:19:28.175 "r_mbytes_per_sec": 0, 00:19:28.175 "w_mbytes_per_sec": 0 00:19:28.175 }, 00:19:28.175 "claimed": false, 00:19:28.175 "zoned": false, 00:19:28.175 "supported_io_types": { 00:19:28.175 "read": true, 00:19:28.175 "write": true, 00:19:28.175 "unmap": false, 00:19:28.175 "flush": false, 00:19:28.175 "reset": true, 00:19:28.175 "nvme_admin": false, 00:19:28.175 "nvme_io": false, 00:19:28.175 "nvme_io_md": false, 00:19:28.175 "write_zeroes": true, 00:19:28.175 "zcopy": false, 00:19:28.175 "get_zone_info": false, 00:19:28.175 "zone_management": false, 00:19:28.175 "zone_append": false, 00:19:28.175 "compare": false, 00:19:28.175 "compare_and_write": false, 00:19:28.175 "abort": false, 00:19:28.175 "seek_hole": false, 00:19:28.175 "seek_data": false, 00:19:28.175 "copy": false, 00:19:28.175 "nvme_iov_md": false 00:19:28.175 }, 00:19:28.175 "memory_domains": [ 00:19:28.175 { 00:19:28.175 "dma_device_id": "system", 00:19:28.175 "dma_device_type": 1 00:19:28.175 }, 00:19:28.175 { 00:19:28.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.175 "dma_device_type": 2 00:19:28.175 }, 00:19:28.175 { 00:19:28.175 "dma_device_id": "system", 00:19:28.175 "dma_device_type": 1 00:19:28.175 }, 00:19:28.175 { 00:19:28.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.175 "dma_device_type": 2 00:19:28.175 } 00:19:28.175 ], 00:19:28.175 "driver_specific": { 
00:19:28.175 "raid": { 00:19:28.175 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:28.175 "strip_size_kb": 0, 00:19:28.176 "state": "online", 00:19:28.176 "raid_level": "raid1", 00:19:28.176 "superblock": true, 00:19:28.176 "num_base_bdevs": 2, 00:19:28.176 "num_base_bdevs_discovered": 2, 00:19:28.176 "num_base_bdevs_operational": 2, 00:19:28.176 "base_bdevs_list": [ 00:19:28.176 { 00:19:28.176 "name": "pt1", 00:19:28.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.176 "is_configured": true, 00:19:28.176 "data_offset": 256, 00:19:28.176 "data_size": 7936 00:19:28.176 }, 00:19:28.176 { 00:19:28.176 "name": "pt2", 00:19:28.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.176 "is_configured": true, 00:19:28.176 "data_offset": 256, 00:19:28.176 "data_size": 7936 00:19:28.176 } 00:19:28.176 ] 00:19:28.176 } 00:19:28.176 } 00:19:28.176 }' 00:19:28.176 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.176 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:28.176 pt2' 00:19:28.176 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.462 
19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:28.462 [2024-11-26 19:06:19.673827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' db16148b-23a0-4a31-a408-310024bcae0f '!=' db16148b-23a0-4a31-a408-310024bcae0f ']' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.462 [2024-11-26 19:06:19.717576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.462 "name": "raid_bdev1", 00:19:28.462 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:28.462 "strip_size_kb": 0, 00:19:28.462 "state": "online", 00:19:28.462 "raid_level": "raid1", 00:19:28.462 "superblock": true, 00:19:28.462 "num_base_bdevs": 2, 00:19:28.462 "num_base_bdevs_discovered": 1, 00:19:28.462 "num_base_bdevs_operational": 1, 00:19:28.462 "base_bdevs_list": [ 00:19:28.462 { 00:19:28.462 "name": null, 00:19:28.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.462 "is_configured": false, 
00:19:28.462 "data_offset": 0, 00:19:28.462 "data_size": 7936 00:19:28.462 }, 00:19:28.462 { 00:19:28.462 "name": "pt2", 00:19:28.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.462 "is_configured": true, 00:19:28.462 "data_offset": 256, 00:19:28.462 "data_size": 7936 00:19:28.462 } 00:19:28.462 ] 00:19:28.462 }' 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.462 19:06:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.029 [2024-11-26 19:06:20.265631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.029 [2024-11-26 19:06:20.265682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.029 [2024-11-26 19:06:20.265825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.029 [2024-11-26 19:06:20.265908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.029 [2024-11-26 19:06:20.265928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.029 19:06:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.029 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.029 [2024-11-26 19:06:20.341625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.029 [2024-11-26 19:06:20.341838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.030 [2024-11-26 19:06:20.341922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:29.030 [2024-11-26 19:06:20.342093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.030 [2024-11-26 19:06:20.344847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.030 [2024-11-26 19:06:20.345054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.030 [2024-11-26 19:06:20.345244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:29.030 [2024-11-26 19:06:20.345451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.030 [2024-11-26 19:06:20.345599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:29.030 [2024-11-26 19:06:20.345749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:29.030 [2024-11-26 19:06:20.345922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:29.030 [2024-11-26 19:06:20.346139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:29.030 [2024-11-26 19:06:20.346267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:29.030 [2024-11-26 19:06:20.346521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:29.030 pt2 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.030 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.288 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.288 "name": "raid_bdev1", 00:19:29.288 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:29.288 "strip_size_kb": 0, 00:19:29.288 "state": "online", 00:19:29.288 "raid_level": "raid1", 00:19:29.288 "superblock": true, 00:19:29.288 "num_base_bdevs": 2, 00:19:29.288 "num_base_bdevs_discovered": 1, 00:19:29.288 "num_base_bdevs_operational": 1, 00:19:29.288 "base_bdevs_list": [ 00:19:29.288 { 00:19:29.288 "name": null, 00:19:29.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.288 "is_configured": false, 00:19:29.288 "data_offset": 256, 00:19:29.288 "data_size": 7936 00:19:29.288 }, 00:19:29.288 { 00:19:29.288 "name": "pt2", 00:19:29.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.288 "is_configured": true, 00:19:29.288 "data_offset": 256, 00:19:29.288 "data_size": 7936 00:19:29.288 } 00:19:29.288 ] 00:19:29.288 }' 00:19:29.288 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.288 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.548 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.548 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.548 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.808 [2024-11-26 19:06:20.917988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.808 [2024-11-26 19:06:20.918175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.808 [2024-11-26 19:06:20.918295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.808 
[2024-11-26 19:06:20.918372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.808 [2024-11-26 19:06:20.918389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.808 [2024-11-26 19:06:20.982012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:29.808 [2024-11-26 19:06:20.982261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:19:29.808 [2024-11-26 19:06:20.982457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:29.808 [2024-11-26 19:06:20.982627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.808 [2024-11-26 19:06:20.985599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.808 [2024-11-26 19:06:20.985645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:29.808 [2024-11-26 19:06:20.985727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:29.808 [2024-11-26 19:06:20.985788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:29.808 [2024-11-26 19:06:20.985939] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:29.808 [2024-11-26 19:06:20.985958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.808 [2024-11-26 19:06:20.985984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:29.808 [2024-11-26 19:06:20.986070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.808 [2024-11-26 19:06:20.986224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:29.808 [2024-11-26 19:06:20.986241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:29.808 [2024-11-26 19:06:20.986328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:29.808 pt1 00:19:29.808 [2024-11-26 19:06:20.986411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:29.808 [2024-11-26 19:06:20.986436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 
00:19:29.808 [2024-11-26 19:06:20.986535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.808 19:06:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.808 19:06:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.808 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.808 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.808 "name": "raid_bdev1", 00:19:29.808 "uuid": "db16148b-23a0-4a31-a408-310024bcae0f", 00:19:29.808 "strip_size_kb": 0, 00:19:29.808 "state": "online", 00:19:29.808 "raid_level": "raid1", 00:19:29.808 "superblock": true, 00:19:29.808 "num_base_bdevs": 2, 00:19:29.808 "num_base_bdevs_discovered": 1, 00:19:29.808 "num_base_bdevs_operational": 1, 00:19:29.808 "base_bdevs_list": [ 00:19:29.808 { 00:19:29.808 "name": null, 00:19:29.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.808 "is_configured": false, 00:19:29.808 "data_offset": 256, 00:19:29.808 "data_size": 7936 00:19:29.808 }, 00:19:29.808 { 00:19:29.808 "name": "pt2", 00:19:29.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.808 "is_configured": true, 00:19:29.808 "data_offset": 256, 00:19:29.808 "data_size": 7936 00:19:29.808 } 00:19:29.808 ] 00:19:29.809 }' 00:19:29.809 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.809 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 19:06:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:30.377 [2024-11-26 19:06:21.578613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' db16148b-23a0-4a31-a408-310024bcae0f '!=' db16148b-23a0-4a31-a408-310024bcae0f ']' 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89229 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89229 ']' 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89229 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89229 00:19:30.377 killing process with pid 89229 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89229' 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89229 00:19:30.377 [2024-11-26 19:06:21.663554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.377 19:06:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89229 00:19:30.377 [2024-11-26 19:06:21.663752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.377 [2024-11-26 19:06:21.663836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.377 [2024-11-26 19:06:21.663872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:30.636 [2024-11-26 19:06:21.844907] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:31.574 ************************************ 00:19:31.574 END TEST raid_superblock_test_md_interleaved 00:19:31.574 ************************************ 00:19:31.574 19:06:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:31.574 00:19:31.574 real 0m6.958s 00:19:31.574 user 0m11.106s 00:19:31.574 sys 0m1.011s 00:19:31.574 19:06:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.574 19:06:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 19:06:22 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:31.574 19:06:22 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:31.574 19:06:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.574 19:06:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 ************************************ 00:19:31.574 START TEST raid_rebuild_test_sb_md_interleaved 00:19:31.574 ************************************ 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:31.574 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89563 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89563 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89563 ']' 00:19:31.575 19:06:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.575 19:06:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.834 [2024-11-26 19:06:23.027690] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:19:31.834 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:31.834 Zero copy mechanism will not be used. 
00:19:31.834 [2024-11-26 19:06:23.027928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89563 ] 00:19:32.093 [2024-11-26 19:06:23.210604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.093 [2024-11-26 19:06:23.355558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.352 [2024-11-26 19:06:23.581236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.352 [2024-11-26 19:06:23.581289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 BaseBdev1_malloc 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 [2024-11-26 19:06:24.105633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:32.929 [2024-11-26 19:06:24.105851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.929 [2024-11-26 19:06:24.105948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:32.929 [2024-11-26 19:06:24.106190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.929 [2024-11-26 19:06:24.109052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.929 BaseBdev1 00:19:32.929 [2024-11-26 19:06:24.109258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 BaseBdev2_malloc 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.929 [2024-11-26 19:06:24.161913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:32.929 [2024-11-26 19:06:24.162153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.929 [2024-11-26 19:06:24.162228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:32.929 [2024-11-26 19:06:24.162358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.929 [2024-11-26 19:06:24.165143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.929 [2024-11-26 19:06:24.165300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:32.929 BaseBdev2 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 spare_malloc 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 spare_delay 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 [2024-11-26 19:06:24.245470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.929 [2024-11-26 19:06:24.245751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.929 [2024-11-26 19:06:24.245802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:32.929 [2024-11-26 19:06:24.245827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.929 spare 00:19:32.929 [2024-11-26 19:06:24.249276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.929 [2024-11-26 19:06:24.249346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 [2024-11-26 19:06:24.253606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.929 [2024-11-26 19:06:24.257176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.929 [2024-11-26 
19:06:24.257675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:32.929 [2024-11-26 19:06:24.257846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:32.929 [2024-11-26 19:06:24.258142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:32.929 [2024-11-26 19:06:24.258355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:32.929 [2024-11-26 19:06:24.258375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:32.929 [2024-11-26 19:06:24.258505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.929 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.930 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.930 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.202 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.202 "name": "raid_bdev1", 00:19:33.202 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:33.202 "strip_size_kb": 0, 00:19:33.202 "state": "online", 00:19:33.202 "raid_level": "raid1", 00:19:33.202 "superblock": true, 00:19:33.202 "num_base_bdevs": 2, 00:19:33.202 "num_base_bdevs_discovered": 2, 00:19:33.202 "num_base_bdevs_operational": 2, 00:19:33.202 "base_bdevs_list": [ 00:19:33.202 { 00:19:33.202 "name": "BaseBdev1", 00:19:33.202 "uuid": "644ed063-4f4d-5359-9d2c-6c27a15fc70a", 00:19:33.202 "is_configured": true, 00:19:33.202 "data_offset": 256, 00:19:33.202 "data_size": 7936 00:19:33.202 }, 00:19:33.202 { 00:19:33.202 "name": "BaseBdev2", 00:19:33.202 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:33.202 "is_configured": true, 00:19:33.202 "data_offset": 256, 00:19:33.202 "data_size": 7936 00:19:33.202 } 00:19:33.202 ] 00:19:33.202 }' 00:19:33.202 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.202 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.461 19:06:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:33.461 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:33.461 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.461 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.461 [2024-11-26 19:06:24.794213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.461 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:33.721 19:06:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.721 [2024-11-26 19:06:24.897836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.721 19:06:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.721 "name": "raid_bdev1", 00:19:33.721 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:33.721 "strip_size_kb": 0, 00:19:33.721 "state": "online", 00:19:33.721 "raid_level": "raid1", 00:19:33.721 "superblock": true, 00:19:33.721 "num_base_bdevs": 2, 00:19:33.721 "num_base_bdevs_discovered": 1, 00:19:33.721 "num_base_bdevs_operational": 1, 00:19:33.721 "base_bdevs_list": [ 00:19:33.721 { 00:19:33.721 "name": null, 00:19:33.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.721 "is_configured": false, 00:19:33.721 "data_offset": 0, 00:19:33.721 "data_size": 7936 00:19:33.721 }, 00:19:33.721 { 00:19:33.721 "name": "BaseBdev2", 00:19:33.721 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:33.721 "is_configured": true, 00:19:33.721 "data_offset": 256, 00:19:33.721 "data_size": 7936 00:19:33.721 } 00:19:33.721 ] 00:19:33.721 }' 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.721 19:06:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 19:06:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.290 19:06:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.290 19:06:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.290 [2024-11-26 19:06:25.418209] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.290 [2024-11-26 19:06:25.437159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:34.290 19:06:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.290 19:06:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:34.290 [2024-11-26 19:06:25.439851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.228 "name": "raid_bdev1", 00:19:35.228 
"uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:35.228 "strip_size_kb": 0, 00:19:35.228 "state": "online", 00:19:35.228 "raid_level": "raid1", 00:19:35.228 "superblock": true, 00:19:35.228 "num_base_bdevs": 2, 00:19:35.228 "num_base_bdevs_discovered": 2, 00:19:35.228 "num_base_bdevs_operational": 2, 00:19:35.228 "process": { 00:19:35.228 "type": "rebuild", 00:19:35.228 "target": "spare", 00:19:35.228 "progress": { 00:19:35.228 "blocks": 2560, 00:19:35.228 "percent": 32 00:19:35.228 } 00:19:35.228 }, 00:19:35.228 "base_bdevs_list": [ 00:19:35.228 { 00:19:35.228 "name": "spare", 00:19:35.228 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:35.228 "is_configured": true, 00:19:35.228 "data_offset": 256, 00:19:35.228 "data_size": 7936 00:19:35.228 }, 00:19:35.228 { 00:19:35.228 "name": "BaseBdev2", 00:19:35.228 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:35.228 "is_configured": true, 00:19:35.228 "data_offset": 256, 00:19:35.228 "data_size": 7936 00:19:35.228 } 00:19:35.228 ] 00:19:35.228 }' 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.228 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.487 [2024-11-26 19:06:26.621521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:35.487 [2024-11-26 19:06:26.649596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:35.487 [2024-11-26 19:06:26.649820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.487 [2024-11-26 19:06:26.649853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.487 [2024-11-26 19:06:26.649924] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.487 "name": "raid_bdev1", 00:19:35.487 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:35.487 "strip_size_kb": 0, 00:19:35.487 "state": "online", 00:19:35.487 "raid_level": "raid1", 00:19:35.487 "superblock": true, 00:19:35.487 "num_base_bdevs": 2, 00:19:35.487 "num_base_bdevs_discovered": 1, 00:19:35.487 "num_base_bdevs_operational": 1, 00:19:35.487 "base_bdevs_list": [ 00:19:35.487 { 00:19:35.487 "name": null, 00:19:35.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.487 "is_configured": false, 00:19:35.487 "data_offset": 0, 00:19:35.487 "data_size": 7936 00:19:35.487 }, 00:19:35.487 { 00:19:35.487 "name": "BaseBdev2", 00:19:35.487 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:35.487 "is_configured": true, 00:19:35.487 "data_offset": 256, 00:19:35.487 "data_size": 7936 00:19:35.487 } 00:19:35.487 ] 00:19:35.487 }' 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.487 19:06:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.055 "name": "raid_bdev1", 00:19:36.055 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:36.055 "strip_size_kb": 0, 00:19:36.055 "state": "online", 00:19:36.055 "raid_level": "raid1", 00:19:36.055 "superblock": true, 00:19:36.055 "num_base_bdevs": 2, 00:19:36.055 "num_base_bdevs_discovered": 1, 00:19:36.055 "num_base_bdevs_operational": 1, 00:19:36.055 "base_bdevs_list": [ 00:19:36.055 { 00:19:36.055 "name": null, 00:19:36.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.055 "is_configured": false, 00:19:36.055 "data_offset": 0, 00:19:36.055 "data_size": 7936 00:19:36.055 }, 00:19:36.055 { 00:19:36.055 "name": "BaseBdev2", 00:19:36.055 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:36.055 "is_configured": true, 00:19:36.055 "data_offset": 256, 00:19:36.055 "data_size": 7936 00:19:36.055 } 00:19:36.055 ] 00:19:36.055 }' 
00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.055 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.055 [2024-11-26 19:06:27.389417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.055 [2024-11-26 19:06:27.407296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:36.056 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.056 19:06:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:36.056 [2024-11-26 19:06:27.410317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.433 "name": "raid_bdev1", 00:19:37.433 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:37.433 "strip_size_kb": 0, 00:19:37.433 "state": "online", 00:19:37.433 "raid_level": "raid1", 00:19:37.433 "superblock": true, 00:19:37.433 "num_base_bdevs": 2, 00:19:37.433 "num_base_bdevs_discovered": 2, 00:19:37.433 "num_base_bdevs_operational": 2, 00:19:37.433 "process": { 00:19:37.433 "type": "rebuild", 00:19:37.433 "target": "spare", 00:19:37.433 "progress": { 00:19:37.433 "blocks": 2560, 00:19:37.433 "percent": 32 00:19:37.433 } 00:19:37.433 }, 00:19:37.433 "base_bdevs_list": [ 00:19:37.433 { 00:19:37.433 "name": "spare", 00:19:37.433 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:37.433 "is_configured": true, 00:19:37.433 "data_offset": 256, 00:19:37.433 "data_size": 7936 00:19:37.433 }, 00:19:37.433 { 00:19:37.433 "name": "BaseBdev2", 00:19:37.433 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:37.433 "is_configured": true, 00:19:37.433 "data_offset": 256, 00:19:37.433 "data_size": 7936 00:19:37.433 } 00:19:37.433 ] 00:19:37.433 }' 00:19:37.433 19:06:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:37.433 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:37.433 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=807 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.434 19:06:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.434 "name": "raid_bdev1", 00:19:37.434 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:37.434 "strip_size_kb": 0, 00:19:37.434 "state": "online", 00:19:37.434 "raid_level": "raid1", 00:19:37.434 "superblock": true, 00:19:37.434 "num_base_bdevs": 2, 00:19:37.434 "num_base_bdevs_discovered": 2, 00:19:37.434 "num_base_bdevs_operational": 2, 00:19:37.434 "process": { 00:19:37.434 "type": "rebuild", 00:19:37.434 "target": "spare", 00:19:37.434 "progress": { 00:19:37.434 "blocks": 2816, 00:19:37.434 "percent": 35 00:19:37.434 } 00:19:37.434 }, 00:19:37.434 "base_bdevs_list": [ 00:19:37.434 { 00:19:37.434 "name": "spare", 00:19:37.434 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:37.434 "is_configured": true, 00:19:37.434 "data_offset": 256, 00:19:37.434 "data_size": 7936 00:19:37.434 }, 00:19:37.434 { 00:19:37.434 "name": "BaseBdev2", 00:19:37.434 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:37.434 "is_configured": true, 00:19:37.434 "data_offset": 256, 00:19:37.434 "data_size": 7936 00:19:37.434 } 00:19:37.434 ] 00:19:37.434 }' 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.434 19:06:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.819 19:06:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.819 "name": "raid_bdev1", 00:19:38.819 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:38.819 "strip_size_kb": 0, 00:19:38.819 "state": "online", 00:19:38.819 "raid_level": "raid1", 00:19:38.819 "superblock": true, 00:19:38.819 "num_base_bdevs": 2, 00:19:38.819 "num_base_bdevs_discovered": 2, 00:19:38.819 "num_base_bdevs_operational": 2, 00:19:38.819 "process": { 00:19:38.819 "type": "rebuild", 00:19:38.819 "target": "spare", 00:19:38.819 "progress": { 00:19:38.819 "blocks": 5888, 00:19:38.819 "percent": 74 00:19:38.819 } 00:19:38.819 }, 00:19:38.819 "base_bdevs_list": [ 00:19:38.819 { 00:19:38.819 "name": "spare", 00:19:38.819 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:38.819 "is_configured": true, 00:19:38.819 "data_offset": 256, 00:19:38.819 "data_size": 7936 00:19:38.819 }, 00:19:38.819 { 00:19:38.819 "name": "BaseBdev2", 00:19:38.819 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:38.819 "is_configured": true, 00:19:38.819 "data_offset": 256, 00:19:38.819 "data_size": 7936 00:19:38.819 } 00:19:38.819 ] 00:19:38.819 }' 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.819 19:06:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:39.386 [2024-11-26 19:06:30.535723] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:39.386 [2024-11-26 19:06:30.535839] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:39.386 [2024-11-26 19:06:30.536015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.675 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.675 "name": "raid_bdev1", 00:19:39.675 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:39.675 "strip_size_kb": 0, 00:19:39.675 "state": "online", 00:19:39.675 "raid_level": "raid1", 00:19:39.675 "superblock": true, 00:19:39.675 "num_base_bdevs": 2, 00:19:39.675 
"num_base_bdevs_discovered": 2, 00:19:39.675 "num_base_bdevs_operational": 2, 00:19:39.675 "base_bdevs_list": [ 00:19:39.675 { 00:19:39.675 "name": "spare", 00:19:39.675 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:39.675 "is_configured": true, 00:19:39.675 "data_offset": 256, 00:19:39.675 "data_size": 7936 00:19:39.675 }, 00:19:39.675 { 00:19:39.676 "name": "BaseBdev2", 00:19:39.676 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:39.676 "is_configured": true, 00:19:39.676 "data_offset": 256, 00:19:39.676 "data_size": 7936 00:19:39.676 } 00:19:39.676 ] 00:19:39.676 }' 00:19:39.676 19:06:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.676 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:39.676 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.934 19:06:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.934 "name": "raid_bdev1", 00:19:39.934 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:39.934 "strip_size_kb": 0, 00:19:39.934 "state": "online", 00:19:39.934 "raid_level": "raid1", 00:19:39.934 "superblock": true, 00:19:39.934 "num_base_bdevs": 2, 00:19:39.934 "num_base_bdevs_discovered": 2, 00:19:39.934 "num_base_bdevs_operational": 2, 00:19:39.934 "base_bdevs_list": [ 00:19:39.934 { 00:19:39.934 "name": "spare", 00:19:39.934 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:39.934 "is_configured": true, 00:19:39.934 "data_offset": 256, 00:19:39.934 "data_size": 7936 00:19:39.934 }, 00:19:39.934 { 00:19:39.934 "name": "BaseBdev2", 00:19:39.934 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:39.934 "is_configured": true, 00:19:39.934 "data_offset": 256, 00:19:39.934 "data_size": 7936 00:19:39.934 } 00:19:39.934 ] 00:19:39.934 }' 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.934 19:06:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.934 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.193 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.193 "name": 
"raid_bdev1", 00:19:40.193 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:40.193 "strip_size_kb": 0, 00:19:40.193 "state": "online", 00:19:40.193 "raid_level": "raid1", 00:19:40.193 "superblock": true, 00:19:40.193 "num_base_bdevs": 2, 00:19:40.193 "num_base_bdevs_discovered": 2, 00:19:40.193 "num_base_bdevs_operational": 2, 00:19:40.193 "base_bdevs_list": [ 00:19:40.193 { 00:19:40.193 "name": "spare", 00:19:40.193 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:40.193 "is_configured": true, 00:19:40.193 "data_offset": 256, 00:19:40.193 "data_size": 7936 00:19:40.193 }, 00:19:40.193 { 00:19:40.193 "name": "BaseBdev2", 00:19:40.193 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:40.193 "is_configured": true, 00:19:40.193 "data_offset": 256, 00:19:40.193 "data_size": 7936 00:19:40.193 } 00:19:40.193 ] 00:19:40.193 }' 00:19:40.193 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.193 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.452 [2024-11-26 19:06:31.777703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.452 [2024-11-26 19:06:31.777931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.452 [2024-11-26 19:06:31.778190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.452 [2024-11-26 19:06:31.778465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.452 [2024-11-26 
19:06:31.778523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.452 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.711 19:06:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.711 [2024-11-26 19:06:31.853683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.711 [2024-11-26 19:06:31.853937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.711 [2024-11-26 19:06:31.854128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:40.711 [2024-11-26 19:06:31.854155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.711 [2024-11-26 19:06:31.857343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.711 [2024-11-26 19:06:31.857386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.711 [2024-11-26 19:06:31.857481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:40.711 [2024-11-26 19:06:31.857561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:40.711 [2024-11-26 19:06:31.857731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.711 spare 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.711 [2024-11-26 19:06:31.957902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:40.711 [2024-11-26 19:06:31.958170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:40.711 [2024-11-26 19:06:31.958356] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:40.711 [2024-11-26 19:06:31.958709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:40.711 [2024-11-26 19:06:31.958735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:40.711 [2024-11-26 19:06:31.958911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.711 19:06:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.711 19:06:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.711 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.711 "name": "raid_bdev1", 00:19:40.711 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:40.711 "strip_size_kb": 0, 00:19:40.711 "state": "online", 00:19:40.711 "raid_level": "raid1", 00:19:40.711 "superblock": true, 00:19:40.711 "num_base_bdevs": 2, 00:19:40.711 "num_base_bdevs_discovered": 2, 00:19:40.711 "num_base_bdevs_operational": 2, 00:19:40.711 "base_bdevs_list": [ 00:19:40.711 { 00:19:40.711 "name": "spare", 00:19:40.711 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:40.711 "is_configured": true, 00:19:40.711 "data_offset": 256, 00:19:40.711 "data_size": 7936 00:19:40.711 }, 00:19:40.711 { 00:19:40.711 "name": "BaseBdev2", 00:19:40.711 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:40.711 "is_configured": true, 00:19:40.711 "data_offset": 256, 00:19:40.711 "data_size": 7936 00:19:40.711 } 00:19:40.711 ] 00:19:40.711 }' 00:19:40.711 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.711 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.278 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.278 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.278 19:06:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.279 "name": "raid_bdev1", 00:19:41.279 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:41.279 "strip_size_kb": 0, 00:19:41.279 "state": "online", 00:19:41.279 "raid_level": "raid1", 00:19:41.279 "superblock": true, 00:19:41.279 "num_base_bdevs": 2, 00:19:41.279 "num_base_bdevs_discovered": 2, 00:19:41.279 "num_base_bdevs_operational": 2, 00:19:41.279 "base_bdevs_list": [ 00:19:41.279 { 00:19:41.279 "name": "spare", 00:19:41.279 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:41.279 "is_configured": true, 00:19:41.279 "data_offset": 256, 00:19:41.279 "data_size": 7936 00:19:41.279 }, 00:19:41.279 { 00:19:41.279 "name": "BaseBdev2", 00:19:41.279 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:41.279 "is_configured": true, 00:19:41.279 "data_offset": 256, 00:19:41.279 "data_size": 7936 00:19:41.279 } 00:19:41.279 ] 00:19:41.279 }' 00:19:41.279 19:06:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.279 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.538 [2024-11-26 19:06:32.694462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.538 19:06:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.538 "name": "raid_bdev1", 00:19:41.538 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:41.538 "strip_size_kb": 0, 00:19:41.538 "state": "online", 00:19:41.538 
"raid_level": "raid1", 00:19:41.538 "superblock": true, 00:19:41.538 "num_base_bdevs": 2, 00:19:41.538 "num_base_bdevs_discovered": 1, 00:19:41.538 "num_base_bdevs_operational": 1, 00:19:41.538 "base_bdevs_list": [ 00:19:41.538 { 00:19:41.538 "name": null, 00:19:41.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.538 "is_configured": false, 00:19:41.538 "data_offset": 0, 00:19:41.538 "data_size": 7936 00:19:41.538 }, 00:19:41.538 { 00:19:41.538 "name": "BaseBdev2", 00:19:41.538 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:41.538 "is_configured": true, 00:19:41.538 "data_offset": 256, 00:19:41.538 "data_size": 7936 00:19:41.538 } 00:19:41.538 ] 00:19:41.538 }' 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.538 19:06:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.105 19:06:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:42.105 19:06:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.105 19:06:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.105 [2024-11-26 19:06:33.238651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.105 [2024-11-26 19:06:33.238995] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:42.105 [2024-11-26 19:06:33.239021] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:42.105 [2024-11-26 19:06:33.239089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.105 [2024-11-26 19:06:33.254859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:42.105 19:06:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.105 19:06:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:42.105 [2024-11-26 19:06:33.257593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:43.041 "name": "raid_bdev1", 00:19:43.041 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:43.041 "strip_size_kb": 0, 00:19:43.041 "state": "online", 00:19:43.041 "raid_level": "raid1", 00:19:43.041 "superblock": true, 00:19:43.041 "num_base_bdevs": 2, 00:19:43.041 "num_base_bdevs_discovered": 2, 00:19:43.041 "num_base_bdevs_operational": 2, 00:19:43.041 "process": { 00:19:43.041 "type": "rebuild", 00:19:43.041 "target": "spare", 00:19:43.041 "progress": { 00:19:43.041 "blocks": 2560, 00:19:43.041 "percent": 32 00:19:43.041 } 00:19:43.041 }, 00:19:43.041 "base_bdevs_list": [ 00:19:43.041 { 00:19:43.041 "name": "spare", 00:19:43.041 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:43.041 "is_configured": true, 00:19:43.041 "data_offset": 256, 00:19:43.041 "data_size": 7936 00:19:43.041 }, 00:19:43.041 { 00:19:43.041 "name": "BaseBdev2", 00:19:43.041 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:43.041 "is_configured": true, 00:19:43.041 "data_offset": 256, 00:19:43.041 "data_size": 7936 00:19:43.041 } 00:19:43.041 ] 00:19:43.041 }' 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.041 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.300 [2024-11-26 19:06:34.432037] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.300 [2024-11-26 19:06:34.467981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.300 [2024-11-26 19:06:34.468255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.300 [2024-11-26 19:06:34.468285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.300 [2024-11-26 19:06:34.468301] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.300 19:06:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.300 "name": "raid_bdev1", 00:19:43.300 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:43.300 "strip_size_kb": 0, 00:19:43.300 "state": "online", 00:19:43.300 "raid_level": "raid1", 00:19:43.300 "superblock": true, 00:19:43.300 "num_base_bdevs": 2, 00:19:43.300 "num_base_bdevs_discovered": 1, 00:19:43.300 "num_base_bdevs_operational": 1, 00:19:43.300 "base_bdevs_list": [ 00:19:43.300 { 00:19:43.300 "name": null, 00:19:43.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.300 "is_configured": false, 00:19:43.300 "data_offset": 0, 00:19:43.300 "data_size": 7936 00:19:43.300 }, 00:19:43.300 { 00:19:43.300 "name": "BaseBdev2", 00:19:43.300 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:43.300 "is_configured": true, 00:19:43.300 "data_offset": 256, 00:19:43.300 "data_size": 7936 00:19:43.300 } 00:19:43.300 ] 00:19:43.300 }' 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.300 19:06:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.867 19:06:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.867 19:06:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.867 19:06:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.867 [2024-11-26 19:06:35.026081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.867 [2024-11-26 19:06:35.026170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.867 [2024-11-26 19:06:35.026225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:43.867 [2024-11-26 19:06:35.026281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.867 [2024-11-26 19:06:35.026554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.867 [2024-11-26 19:06:35.026586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.867 [2024-11-26 19:06:35.026669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:43.867 [2024-11-26 19:06:35.026692] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:43.867 [2024-11-26 19:06:35.026720] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:43.867 [2024-11-26 19:06:35.026777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.867 spare 00:19:43.867 [2024-11-26 19:06:35.042946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:43.867 19:06:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.867 19:06:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:43.867 [2024-11-26 19:06:35.045505] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:44.804 "name": "raid_bdev1", 00:19:44.804 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:44.804 "strip_size_kb": 0, 00:19:44.804 "state": "online", 00:19:44.804 "raid_level": "raid1", 00:19:44.804 "superblock": true, 00:19:44.804 "num_base_bdevs": 2, 00:19:44.804 "num_base_bdevs_discovered": 2, 00:19:44.804 "num_base_bdevs_operational": 2, 00:19:44.804 "process": { 00:19:44.804 "type": "rebuild", 00:19:44.804 "target": "spare", 00:19:44.804 "progress": { 00:19:44.804 "blocks": 2560, 00:19:44.804 "percent": 32 00:19:44.804 } 00:19:44.804 }, 00:19:44.804 "base_bdevs_list": [ 00:19:44.804 { 00:19:44.804 "name": "spare", 00:19:44.804 "uuid": "9a3f023e-eff2-5e56-8431-8ee2b1105439", 00:19:44.804 "is_configured": true, 00:19:44.804 "data_offset": 256, 00:19:44.804 "data_size": 7936 00:19:44.804 }, 00:19:44.804 { 00:19:44.804 "name": "BaseBdev2", 00:19:44.804 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:44.804 "is_configured": true, 00:19:44.804 "data_offset": 256, 00:19:44.804 "data_size": 7936 00:19:44.804 } 00:19:44.804 ] 00:19:44.804 }' 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.804 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.063 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.063 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:45.063 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.063 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.063 [2024-11-26 
19:06:36.207757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.063 [2024-11-26 19:06:36.255207] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:45.063 [2024-11-26 19:06:36.255451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.063 [2024-11-26 19:06:36.255486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.063 [2024-11-26 19:06:36.255508] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:45.063 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.063 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.064 19:06:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.064 "name": "raid_bdev1", 00:19:45.064 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:45.064 "strip_size_kb": 0, 00:19:45.064 "state": "online", 00:19:45.064 "raid_level": "raid1", 00:19:45.064 "superblock": true, 00:19:45.064 "num_base_bdevs": 2, 00:19:45.064 "num_base_bdevs_discovered": 1, 00:19:45.064 "num_base_bdevs_operational": 1, 00:19:45.064 "base_bdevs_list": [ 00:19:45.064 { 00:19:45.064 "name": null, 00:19:45.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.064 "is_configured": false, 00:19:45.064 "data_offset": 0, 00:19:45.064 "data_size": 7936 00:19:45.064 }, 00:19:45.064 { 00:19:45.064 "name": "BaseBdev2", 00:19:45.064 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:45.064 "is_configured": true, 00:19:45.064 "data_offset": 256, 00:19:45.064 "data_size": 7936 00:19:45.064 } 00:19:45.064 ] 00:19:45.064 }' 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.064 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:45.636 19:06:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.636 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.636 "name": "raid_bdev1", 00:19:45.636 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:45.636 "strip_size_kb": 0, 00:19:45.636 "state": "online", 00:19:45.636 "raid_level": "raid1", 00:19:45.636 "superblock": true, 00:19:45.637 "num_base_bdevs": 2, 00:19:45.637 "num_base_bdevs_discovered": 1, 00:19:45.637 "num_base_bdevs_operational": 1, 00:19:45.637 "base_bdevs_list": [ 00:19:45.637 { 00:19:45.637 "name": null, 00:19:45.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.637 "is_configured": false, 00:19:45.637 "data_offset": 0, 00:19:45.637 "data_size": 7936 00:19:45.637 }, 00:19:45.637 { 00:19:45.637 "name": "BaseBdev2", 00:19:45.637 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:45.637 "is_configured": true, 00:19:45.637 "data_offset": 256, 
00:19:45.637 "data_size": 7936 00:19:45.637 } 00:19:45.637 ] 00:19:45.637 }' 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.637 [2024-11-26 19:06:36.970876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.637 [2024-11-26 19:06:36.971203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.637 [2024-11-26 19:06:36.971425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:45.637 [2024-11-26 19:06:36.971570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.637 [2024-11-26 19:06:36.971939] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.637 [2024-11-26 19:06:36.972055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.637 [2024-11-26 19:06:36.972145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:45.637 [2024-11-26 19:06:36.972165] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:45.637 [2024-11-26 19:06:36.972180] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:45.637 [2024-11-26 19:06:36.972193] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:45.637 BaseBdev1 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.637 19:06:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.015 19:06:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.015 19:06:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.015 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.015 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.015 "name": "raid_bdev1", 00:19:47.015 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:47.015 "strip_size_kb": 0, 00:19:47.015 "state": "online", 00:19:47.015 "raid_level": "raid1", 00:19:47.015 "superblock": true, 00:19:47.015 "num_base_bdevs": 2, 00:19:47.015 "num_base_bdevs_discovered": 1, 00:19:47.015 "num_base_bdevs_operational": 1, 00:19:47.015 "base_bdevs_list": [ 00:19:47.015 { 00:19:47.015 "name": null, 00:19:47.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.015 "is_configured": false, 00:19:47.015 "data_offset": 0, 00:19:47.015 "data_size": 7936 00:19:47.015 }, 00:19:47.015 { 00:19:47.015 "name": "BaseBdev2", 00:19:47.015 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:47.015 "is_configured": true, 00:19:47.015 "data_offset": 256, 00:19:47.015 "data_size": 7936 00:19:47.015 } 00:19:47.015 ] 00:19:47.015 }' 00:19:47.015 19:06:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.015 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.274 "name": "raid_bdev1", 00:19:47.274 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:47.274 "strip_size_kb": 0, 00:19:47.274 "state": "online", 00:19:47.274 "raid_level": "raid1", 00:19:47.274 "superblock": true, 00:19:47.274 "num_base_bdevs": 2, 00:19:47.274 "num_base_bdevs_discovered": 1, 00:19:47.274 "num_base_bdevs_operational": 1, 00:19:47.274 "base_bdevs_list": [ 00:19:47.274 { 00:19:47.274 "name": 
null, 00:19:47.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.274 "is_configured": false, 00:19:47.274 "data_offset": 0, 00:19:47.274 "data_size": 7936 00:19:47.274 }, 00:19:47.274 { 00:19:47.274 "name": "BaseBdev2", 00:19:47.274 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:47.274 "is_configured": true, 00:19:47.274 "data_offset": 256, 00:19:47.274 "data_size": 7936 00:19:47.274 } 00:19:47.274 ] 00:19:47.274 }' 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.274 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.534 [2024-11-26 19:06:38.671493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.534 [2024-11-26 19:06:38.671911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:47.534 [2024-11-26 19:06:38.671948] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:47.534 request: 00:19:47.534 { 00:19:47.534 "base_bdev": "BaseBdev1", 00:19:47.534 "raid_bdev": "raid_bdev1", 00:19:47.534 "method": "bdev_raid_add_base_bdev", 00:19:47.534 "req_id": 1 00:19:47.534 } 00:19:47.534 Got JSON-RPC error response 00:19:47.534 response: 00:19:47.534 { 00:19:47.534 "code": -22, 00:19:47.534 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:47.534 } 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.534 19:06:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:48.469 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:48.469 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.469 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.470 "name": "raid_bdev1", 00:19:48.470 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:48.470 "strip_size_kb": 0, 
00:19:48.470 "state": "online", 00:19:48.470 "raid_level": "raid1", 00:19:48.470 "superblock": true, 00:19:48.470 "num_base_bdevs": 2, 00:19:48.470 "num_base_bdevs_discovered": 1, 00:19:48.470 "num_base_bdevs_operational": 1, 00:19:48.470 "base_bdevs_list": [ 00:19:48.470 { 00:19:48.470 "name": null, 00:19:48.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.470 "is_configured": false, 00:19:48.470 "data_offset": 0, 00:19:48.470 "data_size": 7936 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "name": "BaseBdev2", 00:19:48.470 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:48.470 "is_configured": true, 00:19:48.470 "data_offset": 256, 00:19:48.470 "data_size": 7936 00:19:48.470 } 00:19:48.470 ] 00:19:48.470 }' 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.470 19:06:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.038 
19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.038 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.038 "name": "raid_bdev1", 00:19:49.038 "uuid": "d59a6c11-d6f7-4866-8943-966875c393d5", 00:19:49.038 "strip_size_kb": 0, 00:19:49.038 "state": "online", 00:19:49.039 "raid_level": "raid1", 00:19:49.039 "superblock": true, 00:19:49.039 "num_base_bdevs": 2, 00:19:49.039 "num_base_bdevs_discovered": 1, 00:19:49.039 "num_base_bdevs_operational": 1, 00:19:49.039 "base_bdevs_list": [ 00:19:49.039 { 00:19:49.039 "name": null, 00:19:49.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.039 "is_configured": false, 00:19:49.039 "data_offset": 0, 00:19:49.039 "data_size": 7936 00:19:49.039 }, 00:19:49.039 { 00:19:49.039 "name": "BaseBdev2", 00:19:49.039 "uuid": "417b2098-67c4-56f8-a898-66bd43ab5050", 00:19:49.039 "is_configured": true, 00:19:49.039 "data_offset": 256, 00:19:49.039 "data_size": 7936 00:19:49.039 } 00:19:49.039 ] 00:19:49.039 }' 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89563 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89563 ']' 00:19:49.039 19:06:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89563 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89563 00:19:49.039 killing process with pid 89563 00:19:49.039 Received shutdown signal, test time was about 60.000000 seconds 00:19:49.039 00:19:49.039 Latency(us) 00:19:49.039 [2024-11-26T19:06:40.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.039 [2024-11-26T19:06:40.406Z] =================================================================================================================== 00:19:49.039 [2024-11-26T19:06:40.406Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89563' 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89563 00:19:49.039 [2024-11-26 19:06:40.395373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.039 19:06:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89563 00:19:49.039 [2024-11-26 19:06:40.395519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.039 [2024-11-26 19:06:40.395592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:49.039 [2024-11-26 19:06:40.395610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:49.298 [2024-11-26 19:06:40.639174] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.677 ************************************ 00:19:50.677 END TEST raid_rebuild_test_sb_md_interleaved 00:19:50.677 ************************************ 00:19:50.677 19:06:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:50.677 00:19:50.677 real 0m18.747s 00:19:50.677 user 0m25.637s 00:19:50.677 sys 0m1.540s 00:19:50.677 19:06:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.677 19:06:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.677 19:06:41 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:50.677 19:06:41 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:50.677 19:06:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89563 ']' 00:19:50.677 19:06:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89563 00:19:50.677 19:06:41 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:50.677 00:19:50.677 real 13m10.053s 00:19:50.677 user 18m35.686s 00:19:50.677 sys 1m48.273s 00:19:50.677 19:06:41 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.677 19:06:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.677 ************************************ 00:19:50.677 END TEST bdev_raid 00:19:50.677 ************************************ 00:19:50.677 19:06:41 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:50.677 19:06:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:50.677 19:06:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.677 19:06:41 -- common/autotest_common.sh@10 -- # set +x 00:19:50.677 
************************************ 00:19:50.677 START TEST spdkcli_raid 00:19:50.677 ************************************ 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:50.677 * Looking for test storage... 00:19:50.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.677 19:06:41 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.677 --rc genhtml_branch_coverage=1 00:19:50.677 --rc genhtml_function_coverage=1 00:19:50.677 --rc genhtml_legend=1 00:19:50.677 --rc geninfo_all_blocks=1 00:19:50.677 --rc geninfo_unexecuted_blocks=1 00:19:50.677 00:19:50.677 ' 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.677 --rc genhtml_branch_coverage=1 00:19:50.677 --rc genhtml_function_coverage=1 00:19:50.677 --rc genhtml_legend=1 00:19:50.677 --rc geninfo_all_blocks=1 00:19:50.677 --rc geninfo_unexecuted_blocks=1 00:19:50.677 00:19:50.677 ' 00:19:50.677 
19:06:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.677 --rc genhtml_branch_coverage=1 00:19:50.677 --rc genhtml_function_coverage=1 00:19:50.677 --rc genhtml_legend=1 00:19:50.677 --rc geninfo_all_blocks=1 00:19:50.677 --rc geninfo_unexecuted_blocks=1 00:19:50.677 00:19:50.677 ' 00:19:50.677 19:06:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.677 --rc genhtml_branch_coverage=1 00:19:50.677 --rc genhtml_function_coverage=1 00:19:50.677 --rc genhtml_legend=1 00:19:50.677 --rc geninfo_all_blocks=1 00:19:50.677 --rc geninfo_unexecuted_blocks=1 00:19:50.677 00:19:50.677 ' 00:19:50.677 19:06:41 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:50.678 19:06:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:50.678 19:06:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:50.678 19:06:41 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:50.678 19:06:41 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:50.678 19:06:41 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:50.678 19:06:41 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:50.678 19:06:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90244 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90244 00:19:50.678 19:06:42 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90244 ']' 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.678 19:06:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.937 [2024-11-26 19:06:42.148788] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:19:50.937 [2024-11-26 19:06:42.148981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90244 ] 00:19:51.195 [2024-11-26 19:06:42.324282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.195 [2024-11-26 19:06:42.455925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.195 [2024-11-26 19:06:42.455959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.133 19:06:43 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.133 19:06:43 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:52.133 19:06:43 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:52.133 19:06:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.133 19:06:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.133 19:06:43 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:52.133 19:06:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.133 19:06:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.133 19:06:43 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:52.133 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:52.133 ' 00:19:54.033 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:54.033 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:54.033 19:06:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:54.033 19:06:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.033 19:06:44 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.033 19:06:45 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:54.033 19:06:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.033 19:06:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.033 19:06:45 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:54.033 ' 00:19:54.968 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:54.968 19:06:46 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:54.968 19:06:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.968 19:06:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.968 19:06:46 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:54.968 19:06:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.968 19:06:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.968 19:06:46 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:54.968 19:06:46 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:55.556 19:06:46 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:55.556 19:06:46 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:55.556 19:06:46 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:55.556 19:06:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.556 19:06:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.556 19:06:46 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:55.556 19:06:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.556 19:06:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.556 19:06:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:55.556 ' 00:19:56.490 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:56.749 19:06:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:56.749 19:06:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.749 19:06:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.749 19:06:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:56.749 19:06:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.749 19:06:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.749 19:06:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:56.749 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:56.749 ' 00:19:58.123 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:58.123 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:58.381 19:06:49 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.381 19:06:49 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90244 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90244 ']' 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90244 00:19:58.381 19:06:49 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90244 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90244' 00:19:58.381 killing process with pid 90244 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90244 00:19:58.381 19:06:49 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90244 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90244 ']' 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90244 00:20:00.917 19:06:51 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90244 ']' 00:20:00.917 Process with pid 90244 is not found 00:20:00.917 19:06:51 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90244 00:20:00.917 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90244) - No such process 00:20:00.917 19:06:51 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90244 is not found' 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:00.917 19:06:51 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:00.917 00:20:00.917 real 0m10.011s 00:20:00.917 user 0m20.639s 00:20:00.917 sys 
0m1.120s 00:20:00.918 19:06:51 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.918 19:06:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.918 ************************************ 00:20:00.918 END TEST spdkcli_raid 00:20:00.918 ************************************ 00:20:00.918 19:06:51 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:00.918 19:06:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:00.918 19:06:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.918 19:06:51 -- common/autotest_common.sh@10 -- # set +x 00:20:00.918 ************************************ 00:20:00.918 START TEST blockdev_raid5f 00:20:00.918 ************************************ 00:20:00.918 19:06:51 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:00.918 * Looking for test storage... 00:20:00.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:00.918 19:06:51 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:00.918 19:06:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:20:00.918 19:06:51 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.918 19:06:52 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:00.918 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.918 --rc genhtml_branch_coverage=1 00:20:00.918 --rc genhtml_function_coverage=1 00:20:00.918 --rc genhtml_legend=1 00:20:00.918 --rc geninfo_all_blocks=1 00:20:00.918 --rc geninfo_unexecuted_blocks=1 00:20:00.918 00:20:00.918 ' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.918 --rc genhtml_branch_coverage=1 00:20:00.918 --rc genhtml_function_coverage=1 00:20:00.918 --rc genhtml_legend=1 00:20:00.918 --rc geninfo_all_blocks=1 00:20:00.918 --rc geninfo_unexecuted_blocks=1 00:20:00.918 00:20:00.918 ' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.918 --rc genhtml_branch_coverage=1 00:20:00.918 --rc genhtml_function_coverage=1 00:20:00.918 --rc genhtml_legend=1 00:20:00.918 --rc geninfo_all_blocks=1 00:20:00.918 --rc geninfo_unexecuted_blocks=1 00:20:00.918 00:20:00.918 ' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:00.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.918 --rc genhtml_branch_coverage=1 00:20:00.918 --rc genhtml_function_coverage=1 00:20:00.918 --rc genhtml_legend=1 00:20:00.918 --rc geninfo_all_blocks=1 00:20:00.918 --rc geninfo_unexecuted_blocks=1 00:20:00.918 00:20:00.918 ' 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90520 00:20:00.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90520 00:20:00.918 19:06:52 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90520 ']' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.918 19:06:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:00.918 [2024-11-26 19:06:52.187455] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:20:00.918 [2024-11-26 19:06:52.187986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90520 ] 00:20:01.178 [2024-11-26 19:06:52.384495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.437 [2024-11-26 19:06:52.551654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.376 Malloc0 00:20:02.376 Malloc1 00:20:02.376 Malloc2 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.376 19:06:53 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dc50c229-f004-46c0-b478-bddc65d2dded"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dc50c229-f004-46c0-b478-bddc65d2dded",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dc50c229-f004-46c0-b478-bddc65d2dded",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "70f2488c-1dd4-46fd-8cab-3fccd02a0ae2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "dcaa60d5-c8b3-480e-9f0c-f662d4b14f85",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4590d934-2d82-4508-8f8c-f66d53119656",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:02.376 19:06:53 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90520 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90520 ']' 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90520 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:02.376 19:06:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.376 
19:06:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90520 00:20:02.635 killing process with pid 90520 00:20:02.635 19:06:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.635 19:06:53 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.635 19:06:53 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90520' 00:20:02.635 19:06:53 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90520 00:20:02.635 19:06:53 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90520 00:20:05.169 19:06:56 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:05.169 19:06:56 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:05.169 19:06:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:05.169 19:06:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.169 19:06:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:05.169 ************************************ 00:20:05.169 START TEST bdev_hello_world 00:20:05.169 ************************************ 00:20:05.169 19:06:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:05.169 [2024-11-26 19:06:56.132808] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:20:05.169 [2024-11-26 19:06:56.133062] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90582 ] 00:20:05.169 [2024-11-26 19:06:56.319072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.169 [2024-11-26 19:06:56.442715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.737 [2024-11-26 19:06:56.952560] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:05.737 [2024-11-26 19:06:56.952807] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:05.737 [2024-11-26 19:06:56.952859] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:05.737 [2024-11-26 19:06:56.953526] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:05.737 [2024-11-26 19:06:56.953729] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:05.737 [2024-11-26 19:06:56.953761] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:05.737 [2024-11-26 19:06:56.953837] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:20:05.737 00:20:05.737 [2024-11-26 19:06:56.953866] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:07.113 00:20:07.113 real 0m2.184s 00:20:07.113 user 0m1.730s 00:20:07.113 sys 0m0.331s 00:20:07.113 ************************************ 00:20:07.113 END TEST bdev_hello_world 00:20:07.113 ************************************ 00:20:07.113 19:06:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.113 19:06:58 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:07.113 19:06:58 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:07.113 19:06:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.113 19:06:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.113 19:06:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:07.113 ************************************ 00:20:07.113 START TEST bdev_bounds 00:20:07.113 ************************************ 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:07.113 Process bdevio pid: 90624 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90624 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90624' 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90624 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90624 ']' 00:20:07.113 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.113 19:06:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:07.113 [2024-11-26 19:06:58.367654] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:20:07.113 [2024-11-26 19:06:58.367955] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90624 ] 00:20:07.372 [2024-11-26 19:06:58.559113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.372 [2024-11-26 19:06:58.683364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.372 [2024-11-26 19:06:58.683487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.372 [2024-11-26 19:06:58.683512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.309 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.309 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:08.309 19:06:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:08.309 I/O targets: 00:20:08.309 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:08.309 00:20:08.309 00:20:08.309 CUnit 
- A unit testing framework for C - Version 2.1-3 00:20:08.309 http://cunit.sourceforge.net/ 00:20:08.309 00:20:08.309 00:20:08.309 Suite: bdevio tests on: raid5f 00:20:08.309 Test: blockdev write read block ...passed 00:20:08.309 Test: blockdev write zeroes read block ...passed 00:20:08.309 Test: blockdev write zeroes read no split ...passed 00:20:08.309 Test: blockdev write zeroes read split ...passed 00:20:08.309 Test: blockdev write zeroes read split partial ...passed 00:20:08.309 Test: blockdev reset ...passed 00:20:08.309 Test: blockdev write read 8 blocks ...passed 00:20:08.309 Test: blockdev write read size > 128k ...passed 00:20:08.569 Test: blockdev write read invalid size ...passed 00:20:08.569 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:08.569 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:08.569 Test: blockdev write read max offset ...passed 00:20:08.569 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:08.569 Test: blockdev writev readv 8 blocks ...passed 00:20:08.569 Test: blockdev writev readv 30 x 1block ...passed 00:20:08.569 Test: blockdev writev readv block ...passed 00:20:08.569 Test: blockdev writev readv size > 128k ...passed 00:20:08.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:08.569 Test: blockdev comparev and writev ...passed 00:20:08.569 Test: blockdev nvme passthru rw ...passed 00:20:08.569 Test: blockdev nvme passthru vendor specific ...passed 00:20:08.569 Test: blockdev nvme admin passthru ...passed 00:20:08.569 Test: blockdev copy ...passed 00:20:08.569 00:20:08.569 Run Summary: Type Total Ran Passed Failed Inactive 00:20:08.569 suites 1 1 n/a 0 0 00:20:08.569 tests 23 23 23 0 0 00:20:08.569 asserts 130 130 130 0 n/a 00:20:08.569 00:20:08.569 Elapsed time = 0.543 seconds 00:20:08.569 0 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90624 00:20:08.569 19:06:59 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90624 ']' 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90624 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90624 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90624' 00:20:08.569 killing process with pid 90624 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90624 00:20:08.569 19:06:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90624 00:20:09.949 19:07:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:09.949 00:20:09.949 real 0m2.805s 00:20:09.949 user 0m6.940s 00:20:09.949 sys 0m0.457s 00:20:09.949 19:07:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.949 19:07:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:09.949 ************************************ 00:20:09.949 END TEST bdev_bounds 00:20:09.949 ************************************ 00:20:09.949 19:07:01 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:09.949 19:07:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:09.949 19:07:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.949 19:07:01 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:09.949 ************************************ 00:20:09.949 START TEST bdev_nbd 00:20:09.949 ************************************ 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # 
local bdev_list 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90684 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90684 /var/tmp/spdk-nbd.sock 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90684 ']' 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:09.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.949 19:07:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:09.949 [2024-11-26 19:07:01.215751] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:20:09.949 [2024-11-26 19:07:01.216725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.208 [2024-11-26 19:07:01.392920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.208 [2024-11-26 19:07:01.527686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.776 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:11.036 19:07:02 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.296 1+0 records in 00:20:11.296 1+0 records out 00:20:11.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359253 s, 11.4 MB/s 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:11.296 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:11.556 { 00:20:11.556 "nbd_device": "/dev/nbd0", 00:20:11.556 "bdev_name": "raid5f" 00:20:11.556 } 00:20:11.556 ]' 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:11.556 { 00:20:11.556 "nbd_device": "/dev/nbd0", 00:20:11.556 "bdev_name": "raid5f" 00:20:11.556 } 00:20:11.556 ]' 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.556 19:07:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:11.837 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:12.096 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:12.096 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:12.096 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:12.096 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:12.096 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:12.096 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:12.356 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:12.356 /dev/nbd0 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:12.616 19:07:03 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:12.616 1+0 records in 00:20:12.616 1+0 records out 00:20:12.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347038 s, 11.8 MB/s 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:12.616 19:07:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:12.875 { 00:20:12.875 "nbd_device": "/dev/nbd0", 00:20:12.875 "bdev_name": "raid5f" 00:20:12.875 } 00:20:12.875 ]' 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:12.875 { 00:20:12.875 "nbd_device": "/dev/nbd0", 00:20:12.875 "bdev_name": "raid5f" 00:20:12.875 } 00:20:12.875 ]' 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:12.875 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:12.876 256+0 records in 00:20:12.876 256+0 records out 00:20:12.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784216 s, 134 MB/s 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:12.876 256+0 records in 00:20:12.876 256+0 records out 00:20:12.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383676 s, 27.3 MB/s 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.876 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:13.135 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:13.394 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
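The `nbd_dd_data_verify` steps traced above perform a write-then-verify round trip: generate 1 MiB of random data (256 × 4 KiB blocks), push it to the device with `dd`, then `cmp` the device contents against the pattern file. Below is a minimal sketch of that round trip under the assumption that an ordinary temp file stands in for `/dev/nbd0`; the `pattern`/`target` variable names are invented here (the trace uses `.../test/bdev/nbdrandtest` and `/dev/nbd0`).

```shell
#!/bin/sh
# Sketch of nbd_dd_data_verify from the trace (bdev/nbd_common.sh):
# write phase followed by verify phase over the first 1 MiB.
pattern=$(mktemp)    # stands in for .../test/bdev/nbdrandtest
target=$(mktemp)     # stands in for /dev/nbd0

# write phase: dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
dd if=/dev/urandom of="$pattern" bs=4096 count=256 2>/dev/null
# push to the device; the trace adds oflag=direct, which plain files
# may reject, so it is omitted in this sketch
dd if="$pattern" of="$target" bs=4096 count=256 2>/dev/null

# verify phase: cmp -b -n 1M nbdrandtest /dev/nbd0
# (-s used here for a silent pass/fail instead of -b's byte dump)
if cmp -s -n 1048576 "$pattern" "$target"; then
    echo "verify ok"
fi
rm -f "$pattern" "$target"
```

Comparing only the first 1 MiB (`-n 1M` in the trace) matters on a real NBD device, whose capacity exceeds the amount of data written; bytes past the written region are undefined and must not fail the comparison.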
00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:13.654 19:07:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:13.913 malloc_lvol_verify 00:20:13.913 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:14.172 c2e7df3a-361f-4ce5-a2e9-cd0a990283f5 00:20:14.172 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:14.430 f01bda97-a85a-4c85-83f9-f46d810c0b02 00:20:14.430 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:14.689 /dev/nbd0 00:20:14.689 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:14.689 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:14.689 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:14.689 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:14.689 19:07:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:14.689 mke2fs 1.47.0 (5-Feb-2023) 00:20:14.689 Discarding device blocks: 0/4096 done 00:20:14.689 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:14.689 00:20:14.689 Allocating group tables: 0/1 done 00:20:14.689 Writing inode tables: 0/1 done 00:20:14.689 Creating journal (1024 blocks): done 00:20:14.689 Writing superblocks and filesystem accounting information: 0/1 done 00:20:14.689 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.689 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:14.948 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:15.206 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90684 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90684 ']' 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90684 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90684 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.207 killing process with pid 90684 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90684' 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90684 00:20:15.207 19:07:06 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90684 00:20:16.582 19:07:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:16.582 00:20:16.582 real 0m6.541s 00:20:16.582 user 0m9.375s 00:20:16.582 sys 0m1.472s 00:20:16.582 19:07:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.582 19:07:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:16.582 ************************************ 00:20:16.582 END TEST bdev_nbd 00:20:16.582 ************************************ 00:20:16.582 19:07:07 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:16.582 19:07:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:16.582 19:07:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:16.582 19:07:07 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:16.582 19:07:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.582 19:07:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.582 19:07:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:16.582 ************************************ 00:20:16.582 START TEST bdev_fio 00:20:16.582 ************************************ 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:16.582 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:16.582 ************************************ 00:20:16.582 START TEST bdev_fio_rw_verify 00:20:16.582 ************************************ 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:16.582 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:16.583 19:07:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:16.841 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:16.841 fio-3.35 00:20:16.841 Starting 1 thread 00:20:29.063 00:20:29.063 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90890: Tue Nov 26 19:07:19 2024 00:20:29.063 read: IOPS=8482, BW=33.1MiB/s (34.7MB/s)(331MiB/10001msec) 00:20:29.063 slat (usec): min=20, max=157, avg=29.46, stdev= 7.77 00:20:29.063 clat (usec): min=13, max=480, avg=188.50, stdev=73.70 00:20:29.063 lat (usec): min=40, max=523, avg=217.96, stdev=74.99 00:20:29.063 clat percentiles (usec): 00:20:29.063 | 50.000th=[ 186], 99.000th=[ 351], 99.900th=[ 396], 99.990th=[ 433], 00:20:29.063 | 99.999th=[ 482] 00:20:29.063 write: IOPS=8931, BW=34.9MiB/s (36.6MB/s)(344MiB/9870msec); 0 zone resets 00:20:29.063 slat (usec): min=11, max=235, avg=23.47, stdev= 7.54 00:20:29.063 clat (usec): min=70, max=898, avg=428.41, stdev=67.26 00:20:29.063 lat (usec): min=88, max=1104, avg=451.88, stdev=69.20 00:20:29.063 clat percentiles (usec): 00:20:29.063 | 50.000th=[ 429], 99.000th=[ 586], 99.900th=[ 660], 99.990th=[ 791], 00:20:29.063 | 99.999th=[ 898] 00:20:29.063 bw ( KiB/s): min=30984, max=40912, per=98.32%, avg=35127.58, stdev=2490.31, samples=19 00:20:29.063 iops : min= 7746, max=10228, avg=8781.89, stdev=622.58, samples=19 00:20:29.063 lat (usec) : 20=0.01%, 50=0.01%, 100=6.91%, 
250=30.62%, 500=55.25% 00:20:29.063 lat (usec) : 750=7.20%, 1000=0.01% 00:20:29.063 cpu : usr=98.39%, sys=0.68%, ctx=22, majf=0, minf=7389 00:20:29.063 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.063 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.063 issued rwts: total=84838,88157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:29.063 00:20:29.063 Run status group 0 (all jobs): 00:20:29.063 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=331MiB (347MB), run=10001-10001msec 00:20:29.063 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=344MiB (361MB), run=9870-9870msec 00:20:29.322 ----------------------------------------------------- 00:20:29.322 Suppressions used: 00:20:29.322 count bytes template 00:20:29.322 1 7 /usr/src/fio/parse.c 00:20:29.322 686 65856 /usr/src/fio/iolog.c 00:20:29.322 1 8 libtcmalloc_minimal.so 00:20:29.322 1 904 libcrypto.so 00:20:29.322 ----------------------------------------------------- 00:20:29.322 00:20:29.322 ************************************ 00:20:29.322 END TEST bdev_fio_rw_verify 00:20:29.322 ************************************ 00:20:29.322 00:20:29.322 real 0m12.775s 00:20:29.322 user 0m12.974s 00:20:29.322 sys 0m0.742s 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:29.322 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dc50c229-f004-46c0-b478-bddc65d2dded"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dc50c229-f004-46c0-b478-bddc65d2dded",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dc50c229-f004-46c0-b478-bddc65d2dded",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "70f2488c-1dd4-46fd-8cab-3fccd02a0ae2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "dcaa60d5-c8b3-480e-9f0c-f662d4b14f85",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4590d934-2d82-4508-8f8c-f66d53119656",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:29.323 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:29.581 /home/vagrant/spdk_repo/spdk 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:29.581 00:20:29.581 real 0m13.006s 
00:20:29.581 user 0m13.082s 00:20:29.581 sys 0m0.837s 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.581 ************************************ 00:20:29.581 END TEST bdev_fio 00:20:29.581 ************************************ 00:20:29.581 19:07:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:29.581 19:07:20 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:29.581 19:07:20 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:29.581 19:07:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:29.581 19:07:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.581 19:07:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:29.581 ************************************ 00:20:29.581 START TEST bdev_verify 00:20:29.581 ************************************ 00:20:29.581 19:07:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:29.581 [2024-11-26 19:07:20.901169] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 
00:20:29.581 [2024-11-26 19:07:20.901346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91054 ] 00:20:29.840 [2024-11-26 19:07:21.094702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:30.099 [2024-11-26 19:07:21.257718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.099 [2024-11-26 19:07:21.257737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.666 Running I/O for 5 seconds... 00:20:32.540 12087.00 IOPS, 47.21 MiB/s [2024-11-26T19:07:24.842Z] 12650.00 IOPS, 49.41 MiB/s [2024-11-26T19:07:26.217Z] 13063.67 IOPS, 51.03 MiB/s [2024-11-26T19:07:27.153Z] 13244.25 IOPS, 51.74 MiB/s [2024-11-26T19:07:27.153Z] 13265.40 IOPS, 51.82 MiB/s 00:20:35.786 Latency(us) 00:20:35.786 [2024-11-26T19:07:27.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.786 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:35.786 Verification LBA range: start 0x0 length 0x2000 00:20:35.786 raid5f : 5.02 6599.07 25.78 0.00 0.00 29252.09 335.13 26691.03 00:20:35.786 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:35.786 Verification LBA range: start 0x2000 length 0x2000 00:20:35.786 raid5f : 5.03 6657.02 26.00 0.00 0.00 28904.23 169.43 24903.68 00:20:35.786 [2024-11-26T19:07:27.153Z] =================================================================================================================== 00:20:35.786 [2024-11-26T19:07:27.153Z] Total : 13256.09 51.78 0.00 0.00 29077.30 169.43 26691.03 00:20:37.162 00:20:37.162 real 0m7.424s 00:20:37.162 user 0m13.517s 00:20:37.162 sys 0m0.353s 00:20:37.162 19:07:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.162 19:07:28 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:37.162 ************************************ 00:20:37.162 END TEST bdev_verify 00:20:37.162 ************************************ 00:20:37.162 19:07:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:37.162 19:07:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:37.162 19:07:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.162 19:07:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:37.162 ************************************ 00:20:37.162 START TEST bdev_verify_big_io 00:20:37.162 ************************************ 00:20:37.162 19:07:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:37.162 [2024-11-26 19:07:28.364554] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:20:37.162 [2024-11-26 19:07:28.364913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91147 ] 00:20:37.420 [2024-11-26 19:07:28.542450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:37.420 [2024-11-26 19:07:28.671062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.420 [2024-11-26 19:07:28.671071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.986 Running I/O for 5 seconds... 
00:20:40.305 630.00 IOPS, 39.38 MiB/s [2024-11-26T19:07:32.608Z] 727.50 IOPS, 45.47 MiB/s [2024-11-26T19:07:33.545Z] 760.67 IOPS, 47.54 MiB/s [2024-11-26T19:07:34.481Z] 761.00 IOPS, 47.56 MiB/s [2024-11-26T19:07:34.741Z] 761.60 IOPS, 47.60 MiB/s 00:20:43.374 Latency(us) 00:20:43.374 [2024-11-26T19:07:34.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.374 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:43.374 Verification LBA range: start 0x0 length 0x200 00:20:43.374 raid5f : 5.26 386.41 24.15 0.00 0.00 8169288.97 180.60 331731.32 00:20:43.374 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:43.374 Verification LBA range: start 0x200 length 0x200 00:20:43.374 raid5f : 5.31 382.03 23.88 0.00 0.00 8316976.19 187.11 352702.84 00:20:43.374 [2024-11-26T19:07:34.741Z] =================================================================================================================== 00:20:43.374 [2024-11-26T19:07:34.741Z] Total : 768.45 48.03 0.00 0.00 8243096.22 180.60 352702.84 00:20:44.796 ************************************ 00:20:44.796 END TEST bdev_verify_big_io 00:20:44.796 ************************************ 00:20:44.796 00:20:44.796 real 0m7.629s 00:20:44.796 user 0m14.032s 00:20:44.796 sys 0m0.321s 00:20:44.796 19:07:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.796 19:07:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:44.796 19:07:35 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:44.796 19:07:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:44.796 19:07:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.796 19:07:35 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.796 ************************************ 00:20:44.796 START TEST bdev_write_zeroes 00:20:44.796 ************************************ 00:20:44.796 19:07:35 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:44.796 [2024-11-26 19:07:36.061810] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:20:44.796 [2024-11-26 19:07:36.062340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91246 ] 00:20:45.055 [2024-11-26 19:07:36.249358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.055 [2024-11-26 19:07:36.376011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.624 Running I/O for 1 seconds... 
00:20:46.561 21183.00 IOPS, 82.75 MiB/s 00:20:46.561 Latency(us) 00:20:46.561 [2024-11-26T19:07:37.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.561 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:46.562 raid5f : 1.01 21161.92 82.66 0.00 0.00 6026.85 1995.87 8638.84 00:20:46.562 [2024-11-26T19:07:37.929Z] =================================================================================================================== 00:20:46.562 [2024-11-26T19:07:37.929Z] Total : 21161.92 82.66 0.00 0.00 6026.85 1995.87 8638.84 00:20:47.941 00:20:47.941 real 0m3.225s 00:20:47.941 user 0m2.777s 00:20:47.941 sys 0m0.315s 00:20:47.941 19:07:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.941 19:07:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:47.941 ************************************ 00:20:47.941 END TEST bdev_write_zeroes 00:20:47.941 ************************************ 00:20:47.941 19:07:39 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:47.942 19:07:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:47.942 19:07:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.942 19:07:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:47.942 ************************************ 00:20:47.942 START TEST bdev_json_nonenclosed 00:20:47.942 ************************************ 00:20:47.942 19:07:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.243 [2024-11-26 
19:07:39.323317] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:20:48.243 [2024-11-26 19:07:39.323739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91299 ] 00:20:48.243 [2024-11-26 19:07:39.499065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.524 [2024-11-26 19:07:39.625279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.524 [2024-11-26 19:07:39.625586] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:48.524 [2024-11-26 19:07:39.625760] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:48.525 [2024-11-26 19:07:39.625880] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:48.525 00:20:48.525 real 0m0.646s 00:20:48.525 user 0m0.405s 00:20:48.525 sys 0m0.134s 00:20:48.525 19:07:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.525 ************************************ 00:20:48.525 END TEST bdev_json_nonenclosed 00:20:48.525 ************************************ 00:20:48.525 19:07:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:48.783 19:07:39 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.783 19:07:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:48.783 19:07:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.783 19:07:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:48.783 
************************************ 00:20:48.783 START TEST bdev_json_nonarray 00:20:48.783 ************************************ 00:20:48.783 19:07:39 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.783 [2024-11-26 19:07:40.021602] Starting SPDK v25.01-pre git sha1 658cb4c04 / DPDK 24.03.0 initialization... 00:20:48.783 [2024-11-26 19:07:40.021951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91330 ] 00:20:49.043 [2024-11-26 19:07:40.195231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.043 [2024-11-26 19:07:40.353724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.043 [2024-11-26 19:07:40.353886] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:49.043 [2024-11-26 19:07:40.353990] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:49.043 [2024-11-26 19:07:40.354021] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:49.302 00:20:49.302 real 0m0.680s 00:20:49.302 user 0m0.431s 00:20:49.302 sys 0m0.142s 00:20:49.302 ************************************ 00:20:49.302 END TEST bdev_json_nonarray 00:20:49.302 ************************************ 00:20:49.302 19:07:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.302 19:07:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:49.302 19:07:40 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:49.302 00:20:49.302 real 0m48.811s 00:20:49.302 user 1m6.552s 00:20:49.302 sys 0m5.407s 00:20:49.302 19:07:40 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.302 19:07:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:49.302 
************************************ 00:20:49.302 END TEST blockdev_raid5f 00:20:49.302 ************************************ 00:20:49.562 19:07:40 -- spdk/autotest.sh@194 -- # uname -s 00:20:49.562 19:07:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:49.562 19:07:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.562 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:20:49.562 19:07:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:49.562 19:07:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:49.562 19:07:40 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:20:49.562 19:07:40 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:49.562 19:07:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.562 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:20:49.562 19:07:40 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:49.562 19:07:40 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:49.562 19:07:40 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:49.562 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:20:51.470 INFO: APP EXITING 00:20:51.470 INFO: killing all VMs 00:20:51.470 INFO: killing vhost app 00:20:51.470 INFO: EXIT DONE 00:20:51.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:51.470 Waiting for block devices as requested 00:20:51.470 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:51.470 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:52.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:52.409 Cleaning 00:20:52.409 Removing: /var/run/dpdk/spdk0/config 00:20:52.409 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:52.409 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:52.409 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:52.409 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:52.409 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:52.409 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:52.409 Removing: /dev/shm/spdk_tgt_trace.pid56825 00:20:52.409 Removing: /var/run/dpdk/spdk0 00:20:52.409 Removing: /var/run/dpdk/spdk_pid56584 00:20:52.409 Removing: /var/run/dpdk/spdk_pid56825 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57054 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57158 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57214 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57342 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57365 
00:20:52.409 Removing: /var/run/dpdk/spdk_pid57570 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57687 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57794 00:20:52.409 Removing: /var/run/dpdk/spdk_pid57916 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58024 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58069 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58100 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58176 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58293 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58768 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58845 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58919 00:20:52.409 Removing: /var/run/dpdk/spdk_pid58935 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59086 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59108 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59261 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59283 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59347 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59371 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59442 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59460 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59655 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59697 00:20:52.409 Removing: /var/run/dpdk/spdk_pid59786 00:20:52.409 Removing: /var/run/dpdk/spdk_pid61158 00:20:52.409 Removing: /var/run/dpdk/spdk_pid61375 00:20:52.409 Removing: /var/run/dpdk/spdk_pid61525 00:20:52.409 Removing: /var/run/dpdk/spdk_pid62175 00:20:52.409 Removing: /var/run/dpdk/spdk_pid62392 00:20:52.409 Removing: /var/run/dpdk/spdk_pid62538 00:20:52.409 Removing: /var/run/dpdk/spdk_pid63196 00:20:52.409 Removing: /var/run/dpdk/spdk_pid63534 00:20:52.409 Removing: /var/run/dpdk/spdk_pid63681 00:20:52.409 Removing: /var/run/dpdk/spdk_pid65099 00:20:52.409 Removing: /var/run/dpdk/spdk_pid65358 00:20:52.409 Removing: /var/run/dpdk/spdk_pid65511 00:20:52.409 Removing: /var/run/dpdk/spdk_pid66925 00:20:52.409 Removing: /var/run/dpdk/spdk_pid67189 00:20:52.409 Removing: /var/run/dpdk/spdk_pid67329 
00:20:52.409 Removing: /var/run/dpdk/spdk_pid68756 00:20:52.669 Removing: /var/run/dpdk/spdk_pid69207 00:20:52.669 Removing: /var/run/dpdk/spdk_pid69353 00:20:52.669 Removing: /var/run/dpdk/spdk_pid70871 00:20:52.669 Removing: /var/run/dpdk/spdk_pid71138 00:20:52.669 Removing: /var/run/dpdk/spdk_pid71288 00:20:52.669 Removing: /var/run/dpdk/spdk_pid72809 00:20:52.669 Removing: /var/run/dpdk/spdk_pid73076 00:20:52.669 Removing: /var/run/dpdk/spdk_pid73222 00:20:52.669 Removing: /var/run/dpdk/spdk_pid74735 00:20:52.669 Removing: /var/run/dpdk/spdk_pid75228 00:20:52.669 Removing: /var/run/dpdk/spdk_pid75379 00:20:52.669 Removing: /var/run/dpdk/spdk_pid75523 00:20:52.669 Removing: /var/run/dpdk/spdk_pid75980 00:20:52.669 Removing: /var/run/dpdk/spdk_pid76742 00:20:52.669 Removing: /var/run/dpdk/spdk_pid77126 00:20:52.669 Removing: /var/run/dpdk/spdk_pid77832 00:20:52.669 Removing: /var/run/dpdk/spdk_pid78306 00:20:52.669 Removing: /var/run/dpdk/spdk_pid79100 00:20:52.669 Removing: /var/run/dpdk/spdk_pid79539 00:20:52.669 Removing: /var/run/dpdk/spdk_pid81546 00:20:52.669 Removing: /var/run/dpdk/spdk_pid81996 00:20:52.669 Removing: /var/run/dpdk/spdk_pid82445 00:20:52.669 Removing: /var/run/dpdk/spdk_pid84571 00:20:52.669 Removing: /var/run/dpdk/spdk_pid85068 00:20:52.669 Removing: /var/run/dpdk/spdk_pid85578 00:20:52.669 Removing: /var/run/dpdk/spdk_pid86656 00:20:52.669 Removing: /var/run/dpdk/spdk_pid86984 00:20:52.669 Removing: /var/run/dpdk/spdk_pid87944 00:20:52.669 Removing: /var/run/dpdk/spdk_pid88274 00:20:52.669 Removing: /var/run/dpdk/spdk_pid89229 00:20:52.669 Removing: /var/run/dpdk/spdk_pid89563 00:20:52.669 Removing: /var/run/dpdk/spdk_pid90244 00:20:52.669 Removing: /var/run/dpdk/spdk_pid90520 00:20:52.669 Removing: /var/run/dpdk/spdk_pid90582 00:20:52.669 Removing: /var/run/dpdk/spdk_pid90624 00:20:52.669 Removing: /var/run/dpdk/spdk_pid90879 00:20:52.669 Removing: /var/run/dpdk/spdk_pid91054 00:20:52.669 Removing: /var/run/dpdk/spdk_pid91147 
00:20:52.669 Removing: /var/run/dpdk/spdk_pid91246 00:20:52.669 Removing: /var/run/dpdk/spdk_pid91299 00:20:52.669 Removing: /var/run/dpdk/spdk_pid91330 00:20:52.669 Clean 00:20:52.669 19:07:43 -- common/autotest_common.sh@1453 -- # return 0 00:20:52.669 19:07:43 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:52.669 19:07:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.669 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:52.669 19:07:43 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:52.669 19:07:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.669 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:52.669 19:07:44 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:52.669 19:07:44 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:52.669 19:07:44 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:52.991 19:07:44 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:52.991 19:07:44 -- spdk/autotest.sh@398 -- # hostname 00:20:52.991 19:07:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:52.991 geninfo: WARNING: invalid characters removed from testname! 
00:21:19.536 19:08:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:22.137 19:08:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:24.668 19:08:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:27.201 19:08:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:29.734 19:08:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:32.266 19:08:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:34.797 19:08:26 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:34.797 19:08:26 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:34.797 19:08:26 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:34.797 19:08:26 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:34.797 19:08:26 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:34.797 19:08:26 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:35.055 + [[ -n 5213 ]] 00:21:35.055 + sudo kill 5213 00:21:35.065 [Pipeline] } 00:21:35.082 [Pipeline] // timeout 00:21:35.087 [Pipeline] } 00:21:35.102 [Pipeline] // stage 00:21:35.107 [Pipeline] } 00:21:35.123 [Pipeline] // catchError 00:21:35.134 [Pipeline] stage 00:21:35.138 [Pipeline] { (Stop VM) 00:21:35.154 [Pipeline] sh 00:21:35.436 + vagrant halt 00:21:38.742 ==> default: Halting domain... 00:21:44.035 [Pipeline] sh 00:21:44.362 + vagrant destroy -f 00:21:47.651 ==> default: Removing domain... 
00:21:47.663 [Pipeline] sh 00:21:47.944 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output 00:21:47.953 [Pipeline] } 00:21:47.968 [Pipeline] // stage 00:21:47.973 [Pipeline] } 00:21:47.987 [Pipeline] // dir 00:21:47.993 [Pipeline] } 00:21:48.007 [Pipeline] // wrap 00:21:48.014 [Pipeline] } 00:21:48.027 [Pipeline] // catchError 00:21:48.036 [Pipeline] stage 00:21:48.038 [Pipeline] { (Epilogue) 00:21:48.051 [Pipeline] sh 00:21:48.328 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:55.009 [Pipeline] catchError 00:21:55.011 [Pipeline] { 00:21:55.024 [Pipeline] sh 00:21:55.305 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:55.305 Artifacts sizes are good 00:21:55.315 [Pipeline] } 00:21:55.329 [Pipeline] // catchError 00:21:55.344 [Pipeline] archiveArtifacts 00:21:55.353 Archiving artifacts 00:21:55.454 [Pipeline] cleanWs 00:21:55.467 [WS-CLEANUP] Deleting project workspace... 00:21:55.467 [WS-CLEANUP] Deferred wipeout is used... 00:21:55.474 [WS-CLEANUP] done 00:21:55.478 [Pipeline] } 00:21:55.493 [Pipeline] // stage 00:21:55.498 [Pipeline] } 00:21:55.511 [Pipeline] // node 00:21:55.516 [Pipeline] End of Pipeline 00:21:55.552 Finished: SUCCESS